Total regression magnitude across all 51 features: 1047 percentage points of lost progress.
Regressions by platform area:
| Area | Count | Avg Regression |
|---|---|---|
| webassembly | 4 | -29.7pp |
| html | 4 | -25.6pp |
| api | 23 | -19.3pp |
| css | 20 | -19.1pp |
Worst 15 regressions:
| Feature | Was | Now | Delta |
|---|---|---|---|
| base64encodedecode | 100% | 25% | -75.0pp |
| object-fit | 97% | 26% | -70.9pp |
| webvtt | 63% | 6% | -56.4pp |
| empty | 100% | 50% | -50.0pp |
| input-submit | 100% | 50% | -50.0pp |
| supports-compat | 100% | 50% | -50.0pp |
| wasm-multi-value | 67% | 22% | -44.4pp |
| wasm-exception-handling | 54% | 21% | -33.1pp |
| console | 55% | 23% | -32.0pp |
| before-after | 100% | 69% | -30.6pp |
| svg | 54% | 24% | -29.9pp |
| tab-size | 33% | 5% | -28.8pp |
| input-checkbox | 80% | 52% | -27.9pp |
| figure | 100% | 75% | -25.0pp |
| wasm | 45% | 22% | -23.6pp |
Regressions often indicate code removals, refactors that broke existing functionality, or dependency changes. These are typically cheaper to fix than building new features from scratch — the code existed once and often just needs targeted repairs.
60 actively improving — on track to reach 95%:
| Feature | Score | Velocity |
|---|---|---|
| vertical-align | 94.8% | +3.30pp/q |
| font-size | 94.6% | +0.57pp/q |
| constraint-validation | 94.4% | +0.21pp/q |
| trig-functions | 94.3% | +6.10pp/q |
| url | 94.1% | +0.37pp/q |
| textarea | 93.3% | +2.01pp/q |
| webgl | 93.0% | +1.32pp/q |
| oklab | 92.9% | +9.29pp/q |
| calc | 92.6% | +1.05pp/q |
| localstorage | 91.7% | +0.11pp/q |
| import | 91.7% | +0.29pp/q |
| text-encoding | 91.2% | +1.00pp/q |
| ...and 48 more | | |
Top 20 of 141 stalled features, so close yet stuck:
| Feature | Score |
|---|---|
| min-max-clamp | 93.0% |
| supports | 92.7% |
| unset-value | 91.7% |
| dataset | 91.1% |
| background-clip | 90.3% |
| not | 90.2% |
| css-escape | 90.0% |
| css-supports | 88.0% |
| dirname | 87.5% |
| input-date-time | 86.4% |
| channel-messaging | 84.0% |
| list-elements | 83.9% |
| currentcolor | 83.3% |
| min-max-width-height | 83.3% |
| prefers-color-scheme | 83.3% |
| shadow-parts | 82.0% |
| gradients | 81.5% |
| border-radius | 80.3% |
| base | 80.3% |
| font-weight | 80.2% |
Near-complete features need only a small number of failing subtests fixed to cross the 95% threshold. The stalled ones are the highest-ROI investment: small effort, large impact on the headline readiness number.
Progress per snapshot (semiannual intervals):
| Quarter | Features Improved | Regressed | Crossed 95% |
|---|---|---|---|
| 2024-Q1 | 93 | 48 | 7 |
| 2024-Q3 | 112 | 53 | 14 |
| 2025-Q1 | 110 | 60 | 13 |
| 2025-Q3 | 127 | 41 | 12 |
| 2026-Q1 | 153 | 47 | 13 |
Score distribution shift:
| Bucket | 2023-Q3 | 2026-Q1 | Change |
|---|---|---|---|
| 0-20% | 113 | 64 | -49 |
| 20-50% | 103 | 86 | -17 |
| 50-80% | 113 | 122 | +9 |
| 80-95% | 50 | 80 | +30 |
| 95-100% | 47 | 87 | +40 |
No data: 167 → 154
The improving trend is driven by growing contributor engagement and focused work on layout (CSS Grid, Flexbox) and DOM APIs. Recent quarters show more features crossing the 95% threshold, indicating the project is moving from broad partial support to deep per-feature completion.
Stalled features by area:
| Area | Count | % of Stalled |
|---|---|---|
| api | 68 | 48% |
| css | 58 | 41% |
| html | 7 | 5% |
| webassembly | 5 | 4% |
| unknown | 2 | 1% |
| http | 1 | 1% |
Highest-scoring stalled features (closest to completion):
| Feature | Score | Area |
|---|---|---|
| min-max-clamp | 93.0% | css |
| supports | 92.7% | api |
| unset-value | 91.7% | css |
| dataset | 91.1% | api |
| background-clip | 90.3% | css |
| not | 90.2% | css |
| css-escape | 90.0% | api |
| css-supports | 88.0% | api |
| dirname | 87.5% | api |
| input-date-time | 86.4% | api |
| channel-messaging | 84.0% | api |
| list-elements | 83.9% | api |
| currentcolor | 83.3% | css |
| min-max-width-height | 83.3% | css |
| prefers-color-scheme | 83.3% | css |
| ...and 126 more | | |
Stalled features have zero or negative velocity over the entire 2.5-year measurement period. Adding more contributors to already-improving features won't help here — these need dedicated attention, possibly architectural work or new subsystem implementations. Unblocking even 20-30 of the highest-scoring stalled features would significantly shift the projection curves.
No-data features by category:
| Category | Count | Likely supported? |
|---|---|---|
| JS built-ins (SpiderMonkey) | 78 | Yes — tested by test262 |
| Semantic HTML elements | 23 | Yes — trivial elements |
| WebGL extensions | 20 | Depends on GPU driver |
| Basic DOM interfaces | 11 | Likely |
| DOM APIs | 8 | Varies |
| CSS features | 6 | Varies |
| WebAssembly features | 3 | Likely |
| HTTP features | 3 | Varies |
| Other (media types, etc.) | 2 | Varies |
Why no data?
Web Features — The web-features package defines 1,129 curated feature groups that map to Browser Compat Data (BCD) keys. Each feature has a Baseline status: Widely Available (high) means the feature has been supported across all core browsers for 30+ months. This analysis focuses on the 593 Widely Available features as the target set.
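As a concrete sketch, the target set can be selected from the package's published data roughly like this (Python; the data.json path and the status.baseline field layout are assumptions to verify against the installed package version):

```python
import json

# Load the feature catalog shipped with the web-features package.
# (Path and schema are assumptions; check the version you install.)
with open("node_modules/web-features/data.json") as f:
    catalog = json.load(f)

# Baseline "high" = Widely Available: supported across all core
# browsers for 30+ months. This is the report's 593-feature target set.
widely_available = {
    fid: feat
    for fid, feat in catalog["features"].items()
    if feat.get("status", {}).get("baseline") == "high"
}
print(f"{len(widely_available)} Widely Available features")
```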
WPT Feature Manifest — The Web Platform Tests project maintains WEB_FEATURES_MANIFEST.json, which maps 841 web-feature IDs to 53,549 WPT test paths. This is the critical bridge from feature definitions to measurable test outcomes.
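A minimal sketch of loading that bridge, assuming the manifest is (possibly under a version wrapper) a mapping from web-feature ID to a list of test paths; verify the exact shape against the file you download:

```python
import json

with open("WEB_FEATURES_MANIFEST.json") as f:
    manifest = json.load(f)

# Some builds wrap the mapping in {"version": ..., "data": {...}};
# fall back to the top level if not. This shape is an assumption.
feature_tests: dict[str, list[str]] = manifest.get("data", manifest)

total_paths = sum(len(paths) for paths in feature_tests.values())
print(f"{len(feature_tests)} features -> {total_paths} test paths")
```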
WPT Results — Servo runs the full WPT suite daily on wpt.fyi. We use the summary_v2.json format, which records per-test status and subtest counts. Six historical snapshots at semiannual intervals (2023-Q3 through 2026-Q1) provide the longitudinal data.
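A sketch of reading one snapshot, assuming each summary_v2 entry has the shape [harness_status, [passed_subtests, total_subtests]]; both the status spelling and the handling of subtest-less tests are assumptions worth confirming against a downloaded file:

```python
import gzip
import json

with gzip.open("servo-summary_v2.json.gz", "rt") as f:
    summary = json.load(f)   # assumed: {"/path.html": [status, [passed, total]]}

def test_pass_rate(entry) -> float:
    status, (passed, total) = entry
    if total == 0:
        # No subtests recorded: fall back to the harness status.
        # Accepted spellings here are an assumption (full vs. abbreviated).
        return 1.0 if status in ("OK", "PASS", "O", "P") else 0.0
    return passed / total

pass_rates = {path: test_pass_rate(e) for path, e in summary.items()}
```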
Each feature's score is the average per-test pass rate across all matched WPT tests. For a test with subtests, the per-test pass rate is passed_subtests / total_subtests; for a simple pass/fail test it is 1 or 0. A feature is considered fully supported at a score of ≥95%.
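In code, the scoring rule is small; this standalone sketch uses toy numbers rather than real results:

```python
from statistics import mean

FULLY_SUPPORTED = 0.95   # the report's "fully supported" threshold

def feature_score(test_paths: list[str],
                  pass_rates: dict[str, float]) -> float | None:
    """Average per-test pass rate; None if no mapped test has results."""
    matched = [pass_rates[p] for p in test_paths if p in pass_rates]
    return mean(matched) if matched else None

# Toy example: three tests, one with 14/16 subtests passing.
rates = {"/css/a.html": 1.0, "/css/b.html": 0.5, "/css/c.html": 14 / 16}
score = feature_score(list(rates), rates)       # ~0.792
assert score is not None and score < FULLY_SUPPORTED
```

Tests with no recorded results are skipped in this sketch; counting them as 0.0 instead would be a stricter, equally defensible choice.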
Features with no WPT manifest mapping (154 of 593) are categorized as “No Data.” They break down into 78 JS built-ins (supported via SpiderMonkey and covered by test262 rather than WPT), 23 semantic HTML elements (handled by the HTML parser, with no special behavior to test), 20 WebGL extensions (dependent on GPU driver support), and 33 other features (basic DOM, CSS, HTTP, WebAssembly) whose status is unknown.
The contributor count uses full-time equivalents (FTE), not raw headcount. Computed from per-author commit counts over Jul 2025 – Jan 2026 (7 months), excluding bots (dependabot, WPT Sync). Each author's FTE fraction = min(their_commits / 154, 1.0), where 154 = 22 commits/month × 7 months (~1 commit per working day). The sum across all 115 human authors yields ~13 FTE. Of these, 9 core contributors operate at ≥50% FTE, with a long tail of occasional contributors. This FTE figure is what drives the projection and cost models — it represents the effective engineering capacity, which is what matters for funding decisions.
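A sketch of the FTE computation as described (the bot names and toy commit counts are illustrative):

```python
FULL_TIME_COMMITS = 22 * 7    # 154: ~1 commit per working day over 7 months
BOTS = {"dependabot[bot]", "wpt-sync-bot"}   # illustrative bot names

def total_fte(commits_by_author: dict[str, int]) -> float:
    return sum(
        min(commits / FULL_TIME_COMMITS, 1.0)   # cap each author at 1.0 FTE
        for author, commits in commits_by_author.items()
        if author not in BOTS
    )

# Toy data: one core contributor at the cap, two occasional contributors.
print(total_fte({"alice": 310, "bob": 40, "carol": 7}))   # ~1.31
```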
Per-feature velocity is computed as the improvement per quarter from 2023-Q3 to present. We use max(overall_velocity, recent_velocity) where recent velocity covers the last 4 quarters, to account for accelerating progress. Features with velocity ≤0.001 percentage points/quarter are classified as stalled.
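Expressed as code (scores are per-snapshot pass rates in percentage points, oldest first; because snapshots are semiannual, the 4-quarter recent window spans the last two intervals):

```python
QUARTERS_PER_SNAPSHOT = 2     # snapshots are semiannual
STALLED_BELOW = 0.001         # pp/quarter, from the text

def velocity(scores: list[float]) -> float:
    """max(overall, recent-4-quarters) slope in pp/quarter."""
    overall = (scores[-1] - scores[0]) / ((len(scores) - 1) * QUARTERS_PER_SNAPSHOT)
    recent = (scores[-1] - scores[-3]) / 4 if len(scores) >= 3 else overall
    return max(overall, recent)

v = velocity([60.0, 61.0, 62.0, 63.0, 66.0, 70.0])  # accelerating feature
assert v == 1.75              # recent (1.75) beats overall (1.0)
print("stalled:", v <= STALLED_BELOW)
```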
Projections extrapolate each feature individually: quarters_to_95% = (0.95 − current_score) / (velocity × scale_factor). The scale factor models additional FTEs using a sublinear power law: scale = (new_fte / current_fte)^exponent. The default exponent of 0.7 reflects Brooks's Law: communication overhead means doubling contributors doesn't double output.
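A sketch of the per-feature projection under scaled capacity (scores as fractions, velocity in pp/quarter, consistent with the velocity sketch above):

```python
SCALING_EXPONENT = 0.7    # sublinear: doubling FTEs does not double output

def quarters_to_95(score: float, velocity_pp_q: float,
                   current_fte: float, new_fte: float,
                   exponent: float = SCALING_EXPONENT) -> float | None:
    scale = (new_fte / current_fte) ** exponent
    scaled_velocity = velocity_pp_q * scale
    if scaled_velocity <= 0:
        return None           # stalled: never crosses 95% on trend
    return (95.0 - score * 100.0) / scaled_velocity

# An 80% feature at +1 pp/q, doubling 13 -> 26 FTE: scale = 2**0.7 ~ 1.62,
# so ~15 / 1.62 ~ 9.2 quarters instead of 15.
print(quarters_to_95(0.80, 1.0, 13.0, 26.0))
```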
Cost is calculated as: additional_FTE × salary × years_to_milestone, where the milestone date is determined by the Nth-percentile feature reaching 95%. This represents the marginal cost of accelerating beyond the current trajectory — not the total program cost. The model assumes FTEs can be allocated to stalled features and that the scaling exponent applies uniformly.
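A sketch of the marginal-cost calculation; the percentile bookkeeping (sorting projected completion quarters and picking the Nth) is one reading of "the Nth-percentile feature reaching 95%":

```python
import math

def marginal_cost(projected_quarters: list[float],
                  additional_fte: float,
                  salary_eur: float = 200_000,
                  percentile: float = 0.80) -> float:
    """additional_FTE x salary x years, milestone = percentile-th feature."""
    ordered = sorted(projected_quarters)
    idx = max(0, math.ceil(percentile * len(ordered)) - 1)
    years_to_milestone = ordered[idx] / 4.0
    return additional_fte * salary_eur * years_to_milestone

# If the 80th-percentile feature needs 12 quarters with 5 extra FTE:
# 5 x 200,000 x 3.0 years = 3.0M EUR of marginal cost.
print(marginal_cost([4.0, 8.0, 10.0, 12.0, 20.0], additional_fte=5.0))
```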
The default salary of €200k/yr is composed of a €150k base (European senior software engineer median total cost to employer) × a 1.33× multiplier for two factors: the scarcity of browser-engine specialists, and the multi-disciplinary scope of the work (the “Specialized + multi-disciplinary premium” row in the table below).
Reference rates used to calibrate this figure (a conversion check follows the table):
| Source | Annual Equivalent | Notes |
|---|---|---|
| NLnet grants (up to €65/hr) | €117k | Below-market cost-recovery rate |
| Sovereign Tech Fund (employment + social) | €79–101k | German public-sector TVöD scale |
| European senior SWE median (total cost) | €100–140k | Varies by country; incl. employer costs |
| Mozilla Germany senior SWE (Levels.fyi) | €145–164k | Top-tier browser-engine employer |
| Model default (€150k × 1.33) | €200k | Specialized + multi-disciplinary premium |
| Servo grant ($150/hr × 1800) | €248k | US contractor rate with overhead |
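For transparency, the arithmetic behind the table rows, under two stated assumptions: 1,800 billable hours/year for the hourly rates, and roughly 0.92 USD→EUR for the Servo grant row:

```python
HOURS_PER_YEAR = 1_800      # assumption for hourly-to-annual conversion
USD_TO_EUR = 0.92           # assumption; approximate spot rate

nlnet = 65 * HOURS_PER_YEAR                        # 117,000 EUR
servo_grant = 150 * HOURS_PER_YEAR * USD_TO_EUR    # ~248,400 EUR
model_default = 150_000 * 1.33                     # 199,500 ~ 200k EUR
print(nlnet, round(servo_grant), round(model_default))
```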
Thanks to the people who’ve reviewed the methodology and approach so far. NOTE: AI was used in the making of this draft report. For suggestions for improvement or to report inaccuracies, contact dietrich@webtransitions.org.