# Web Feature Usage & Prioritization Report
A standalone, engine-agnostic dataset and report that prioritizes web platform features by real-world usage. Useful for any browser engine project deciding what to implement next.
## Quick start

```sh
npm run usage-json    # → data/web-feature-usage.json
npm run usage-report  # → web-feature-usage.html
```
## Data sources
All sources are treated as parallel signals — no waterfall or hierarchy. Each source tells a different part of the story.
| Source | Internal key | Weight | What it measures | Denominator | Features |
|---|---|---|---|---|---|
| HA custom_metrics (specific) | `ha_custom_metrics` | 0.35 | CSS properties, HTML elements, selectors, at-rules via static page analysis | Fraction of ~10.9M crawled pages (site-weighted) | ~107 |
| HA custom_metrics (generic) | `ha_custom_metrics_generic` | 0.10 | Same as above, but the observable is a common parent property (e.g., "display" for `display: flex`). Informative upper bound, noisy. | Same | ~61 |
| HA blink_features | `ha_blink_features` | 0.20 | Blink UseCounter data via the HTTP Archive BigQuery mirror. Covers JS API interface instantiation. | Same | ~44 |
| ChromeStatus | `chromestatus` | 0.15 | Chrome telemetry (`day_percentage`) aggregated across all Chrome page loads globally | All Chrome page loads (traffic-weighted) | ~383 |
| Firefox desktop | `firefox_desktop` | 0.15 | Firefox desktop use-counter telemetry (CSS properties, JS APIs) | All Firefox desktop page loads (traffic-weighted) | ~82 |
| Firefox Fenix | `firefox_fenix` | 0.15 | Firefox Fenix/Android use-counter telemetry — only mobile signal available | All Fenix page loads (traffic-weighted) | ~81 |
Total: 383 of 415 Baseline Widely Available (BWA) features have usage data from at least one source. 32 features have no data.
## Weighted composite scoring methodology

### Philosophy
- Keep all features in the list — never drop a feature, never drop a data point
- All sources are parallel signals, not a hierarchy — each tells a different part of the story
- Firefox is equally weighted to Chrome — it's an independent browser signal, and Fenix provides our only mobile data
- Annotate confidence, don't filter — experts can evaluate the backing data themselves
- The composite score is one view among many, not the final word
### How weights work
Weights are assigned by measurement class, not per-browser:
- Content analysis (specific): HA custom_metrics with specific observable = 0.35. Ground truth — the feature is actually on the page. Site-weighted means every site counts equally.
- Content analysis (generic): HA custom_metrics with generic observable = 0.10. Informative upper bound but noisy — heavily discounted vs specific.
- Chrome runtime (site-weighted): HA blink_features = 0.20. Runtime detection on crawled pages, same denominator as HA. Discounted for detection artifacts.
- Chrome telemetry (traffic-weighted): ChromeStatus = 0.15. Real user telemetry, but traffic-weighted (high-traffic sites dominate).
- Firefox telemetry (desktop, traffic-weighted): Firefox desktop = 0.15. Independent browser signal.
- Firefox telemetry (mobile, traffic-weighted): Firefox Fenix = 0.15. Only mobile signal available. Unique value.
### Weight redistribution
When a source is missing for a feature, its weight is redistributed proportionally among available sources. Example: if a feature only has ChromeStatus + Firefox desktop, the effective weights become ChromeStatus 0.50 and Firefox desktop 0.50.
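As a concrete sketch (hypothetical helper name; the real logic lives in `data/build-usage-json.mjs`), redistribution is just renormalizing the base weights over the sources that have data:

```javascript
// Base weights by measurement class (from the table above).
const BASE_WEIGHTS = {
  ha_custom_metrics: 0.35,
  ha_custom_metrics_generic: 0.10,
  ha_blink_features: 0.20,
  chromestatus: 0.15,
  firefox_desktop: 0.15,
  firefox_fenix: 0.15,
};

// Renormalize over only the sources present for this feature, so
// missing weight is redistributed proportionally among the rest.
function effectiveWeights(availableSources) {
  const total = availableSources.reduce((sum, s) => sum + BASE_WEIGHTS[s], 0);
  return Object.fromEntries(
    availableSources.map((s) => [s, BASE_WEIGHTS[s] / total])
  );
}

// The example from the text: only ChromeStatus + Firefox desktop.
effectiveWeights(["chromestatus", "firefox_desktop"]);
// → { chromestatus: 0.5, firefox_desktop: 0.5 }
```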
### Blink artifact discount

blink_features values >45% from canvas/webgl/media-related interfaces receive a 0.5 discount multiplier to account for suspected detection artifacts (suspicious clustering around ~53.5%).
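Sketched in code (the interface matcher here is an assumption; the report only specifies canvas/webgl/media-related interfaces):

```javascript
// Assumed matcher for canvas/webgl/media-related interface names.
const ARTIFACT_INTERFACES = /canvas|webgl|media/i;

// Halve blink_features values that fall in the suspicious cluster.
function discountBlinkValue(pct, interfaceName) {
  const suspect = pct > 0.45 && ARTIFACT_INTERFACES.test(interfaceName);
  return suspect ? pct * 0.5 : pct;
}

discountBlinkValue(0.535, "HTMLCanvasElement"); // → 0.2675 (discounted)
discountBlinkValue(0.30, "HTMLCanvasElement");  // → 0.3 (below threshold)
```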
### Generic observable recovery

Features previously dropped by the specificity filter (e.g., `subgrid` via `grid-template-columns`) are now included with lower weight (0.10 vs 0.35) and flagged as `generic_observable_only`. The value represents an informative upper bound.
### Confidence levels
| Level | Criteria |
|---|---|
| high | 3+ sources with data, max pairwise delta < 20% absolute |
| medium | 2 sources, OR 3+ sources with delta > 20% |
| low | 1 source only |
| none | No data from any source |
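The criteria above read as a small classifier; a sketch with usage values as fractions (0.20 = 20%):

```javascript
// Assign a confidence level from the per-source usage values a feature has.
function confidenceLevel(values) {
  if (values.length === 0) return "none";
  if (values.length === 1) return "low";
  if (values.length === 2) return "medium";
  // With 3+ sources, the max pairwise delta is just best minus worst.
  const delta = Math.max(...values) - Math.min(...values);
  return delta < 0.20 ? "high" : "medium";
}

confidenceLevel([0.82, 0.79, 0.75]); // → "high" (delta 0.07)
confidenceLevel([0.82, 0.40, 0.75]); // → "medium" (delta 0.42)
```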
### Flags
Non-exclusive flags that annotate data quality:
| Flag | Meaning |
|---|---|
| `generic_observable_only` | Only HA signal is a generic parent property (e.g., "display" for `display: flex`) |
| `blink_artifact_suspected` | Blink features value is in the suspicious cluster (>45%, canvas/webgl/media interfaces), discounted 50% |
| `single_source` | Only one source has data |
| `chrome_firefox_divergent` | Chrome and Firefox desktop disagree by >15% absolute |
| `mobile_divergent` | Fenix and desktop Firefox disagree by >10% absolute |
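The two divergence flags are simple threshold checks; a sketch with hypothetical field names:

```javascript
// Compute divergence flags from per-source values (fractions, null = no data).
function divergenceFlags({ chrome, firefoxDesktop, firefoxFenix }) {
  const flags = [];
  if (chrome != null && firefoxDesktop != null &&
      Math.abs(chrome - firefoxDesktop) > 0.15) {
    flags.push("chrome_firefox_divergent");
  }
  if (firefoxDesktop != null && firefoxFenix != null &&
      Math.abs(firefoxDesktop - firefoxFenix) > 0.10) {
    flags.push("mobile_divergent");
  }
  return flags;
}

// flexbox from the sample output: Chrome 0.826 vs Firefox desktop 0.584
// (delta 0.242) and Fenix 0.743 vs desktop 0.584 (delta 0.159).
divergenceFlags({ chrome: 0.826, firefoxDesktop: 0.584, firefoxFenix: 0.743 });
// → ["chrome_firefox_divergent", "mobile_divergent"]
```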
## How Firefox use-counter data is collected
API endpoints (public, no auth required):
- Desktop: `https://public-data.telemetry.mozilla.org/api/v1/tables/firefox_desktop_derived/firefox_desktop_use_counters/v2/files`
- Fenix (Android): `https://public-data.telemetry.mozilla.org/api/v1/tables/fenix_derived/fenix_use_counters/v2/files`
Each endpoint returns a list of JSON file URLs (one per day). Each file contains records keyed by (submission_date, version_major, country, metric) with pre-computed rate and raw cnt fields.
Metric naming conventions:
- CSS: `use.counter.css.page.css_grid_template_columns` → strip prefix → strip `css_` → replace `_` with `-` → `grid-template-columns` → BCD `css.properties.grid-template-columns`
- API: `use.counter.page.window_intersectionobserver` → strip prefix → strip `window_` → case-match to a known BCD interface → `IntersectionObserver` → BCD `api.IntersectionObserver`
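The CSS convention is mechanical enough to sketch directly (the API convention additionally needs a lossy case-match against known BCD interfaces, so it is omitted here):

```javascript
// Map a Firefox CSS use-counter metric name to its BCD key.
function cssMetricToBcdKey(metric) {
  const prop = metric
    .replace(/^use\.counter\.css\.page\./, "") // strip prefix
    .replace(/^css_/, "")                      // strip css_
    .replace(/_/g, "-");                       // replace _ with -
  return `css.properties.${prop}`;
}

cssMetricToBcdKey("use.counter.css.page.css_grid_template_columns");
// → "css.properties.grid-template-columns"
```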
Aggregation: Records are per-(country, version). Global rate = `sum(cnt) / sum(use_counter_top_level_content_documents_destroyed)` across all countries and versions.
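A sketch of that aggregation, using the field names from the telemetry files:

```javascript
// Aggregate per-(country, version) records into one global rate:
// total counter hits over total top-level content documents destroyed.
function globalRate(records) {
  let cnt = 0;
  let denom = 0;
  for (const r of records) {
    cnt += r.cnt;
    denom += r.use_counter_top_level_content_documents_destroyed;
  }
  return denom === 0 ? null : cnt / denom;
}

globalRate([
  { cnt: 60, use_counter_top_level_content_documents_destroyed: 100 },
  { cnt: 30, use_counter_top_level_content_documents_destroyed: 100 },
]); // → 0.45 (90 / 200)
```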
Coverage: ~682 CSS metrics and ~367 API metrics in Firefox telemetry. After mapping to web-features BCD keys, ~82 BWA features have data. CSS mapping is mechanical and reliable. API mapping requires case normalization (lowercase → PascalCase) which is lossy — some APIs don't match.
Limitations:
- Firefox-only user base (~3-4% global market share)
- Traffic-weighted like ChromeStatus (not site-weighted like HTTP Archive)
- No HTML element counters, no CSS selectors, no CSS at-rules
- Fenix files are very large (~765MB uncompressed) — collected via streaming
```sh
node data/collect-firefox.mjs                 # collect latest data
node data/collect-firefox.mjs --dry-run       # show file info, no download
node data/collect-firefox.mjs --reprocess     # reprocess cached desktop data
node data/collect-firefox.mjs --desktop-only  # skip Fenix data
```
## How chromestatus data maps to web-features
The mapping pipeline:
```
chromestatus.com/data/webfeaturepopularity
  → [{property_name: "Flexbox", day_percentage: 0.826, ...}]
        │
        ▼
AreWeBrowserYet/servo-bcd-scripts/feature-tag.ts
  1. Converts property_name to kebab-case (e.g. "Canvas2d" → "canvas-2d")
  2. Matches against the `web-features` npm package, which defines
     canonical feature-id → BCD key mappings
  3. Attaches BCD collector results (Servo pass/fail per BCD key)
  4. Writes popularityBcdMap.json
        │
        ▼ (copied into this repo)
data/collector-results/popularityBcdMap.json
  { "flexbox": { day_percentage: 0.826, bcd_entries: [...] },
    "avif": null,          ← no chromestatus BCD association
    ... }
        │
        ├──▶ data/build-httparchive-mapping.mjs
        │      Maps BCD keys → HTTP Archive observable paths
        │      Generates BigQuery SQL for the HA crawl queries
        │      Writes: httparchive-mapping.json
        │        │
        │        ▼
        │    data/collect-httparchive.mjs  (npm run collect → BigQuery)
        │        │
        │        ▼
        │    data/httparchive-results/combined-usage.json
        │
        └──▶ [all sources merged as parallel signals]
                │
                ▼
        data/build-usage-json.mjs   → data/web-feature-usage.json
        data/build-usage-report.mjs → web-feature-usage.html
        data/integrate-usage.mjs    → patches index.html
```
Firefox use-counter pipeline (independent, parallel signal):
```
Mozilla public telemetry API
  → data/collect-firefox.mjs  (node data/collect-firefox.mjs)
      Downloads desktop + Fenix JSON files
      Streams large files, aggregates per-metric rates
      Maps metric names → BCD keys → web-feature IDs
        │
        ▼
data/firefox-results/combined-usage.json
        │
        ▼ (loaded by build-usage-json.mjs)
Desktop and Fenix treated as separate parallel signals
(firefox_desktop and firefox_fenix in output)
```
The join is name-based: chromestatus `property_name` → kebab-case normalization → lookup against `web-features` package IDs.
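The normalization itself lives in feature-tag.ts; a plausible sketch of what it must do (the exact regexes here are guesses):

```javascript
// Kebab-case a chromestatus property_name for the name-based join,
// e.g. "Canvas2d" → "canvas-2d", "Flexbox" → "flexbox".
function kebabCase(name) {
  return name
    .replace(/([a-z])([A-Z])/g, "$1-$2")    // split camelCase boundaries
    .replace(/([a-zA-Z])([0-9])/g, "$1-$2") // split letter/digit boundaries
    .toLowerCase();
}

kebabCase("Canvas2d"); // → "canvas-2d"
kebabCase("Flexbox");  // → "flexbox"
```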
## Mapping quality & coverage gaps

### 32 features with no usage data from any source
Draft features (7) — not yet shipped, expected gap:

`draft_crash-report-storage-apiinitialize`, `draft_meta-text-scale`, `draft_reference-target`, `draft_wasm-branch-hinting`, `draft_wasm-custom-descriptors`, `draft_web-app-manifest-update`, `draft_web-install-api`

Obsolete features (13) — retired chromestatus groupings, expected gap:

`obsolete_canvas-2d`, `obsolete_canvas-alpha`, `obsolete_canvas-color-management`, `obsolete_canvas-desynchronized`, `obsolete_canvas-element`, `obsolete_canvas-fill-text`, `obsolete_canvas-measure-text`, `obsolete_canvas-text-baselines`, `obsolete_element-check-visibility`, `obsolete_hidden-until-found-attribute`, `obsolete_locale-info-obsoleted-getters`, `obsolete_popover`, `obsolete_will-read-frequently`

Regular features with no match (12) — the substantive gap:

- `avif`, `webp`, `http3` — high-usage features; chromestatus likely tracks these under different IDs or has no BCD associations for them
- `jpegxl` — removed from Chrome; chromestatus entry may lack BCD
- `intersection-observer-v2` — v1 matched; v2 may not have a distinct chromestatus entry
- `view-transitions-element-scoped`, `uint8-array-base64-hex`, `float16-array` — newer features, may not have BCD associations in chromestatus yet
- `function` — ambiguous name, unlikely to map to any single chromestatus feature
- `prompt` — generic name; `beforeinstallprompt` matched separately
- `text-detect`, `translation-api` — experimental/new APIs
## Output format
`data/web-feature-usage.json`:

```jsonc
{
  "generated": "2026-02-26T...",
  "description": "Web feature usage with weighted composite scoring...",
  "sources": {
    "ha_custom_metrics": { "weight": 0.35, "feature_count": 107 },
    "ha_custom_metrics_generic": { "weight": 0.10, "feature_count": 61 },
    "ha_blink_features": { "weight": 0.20, "feature_count": 44 },
    "chromestatus": { "weight": 0.15, "feature_count": 383 },
    "firefox_desktop": { "weight": 0.15, "feature_count": 82 },
    "firefox_fenix": { "weight": 0.15, "feature_count": 81 }
  },
  "methodology": { "weighting": "...", "confidence_levels": {...}, "flags": {...} },
  "summary": {
    "total_features": 415,
    "with_composite_score": 383,
    "without_data": 32,
    "tiers": { "above_50_pct": 20, "10_to_50_pct": 103, ... },
    "by_confidence": { "high": 67, "medium": 150, "low": 166, "none": 32 },
    "cross_validation_pairs": 83
  },
  "features": [
    {
      "name": "flexbox",
      "signals": {
        "ha_custom_metrics": { "pct": 0.877, "observable": "align-items", "type": "css_property" },
        "ha_blink_features": null,
        "chromestatus": { "pct": 0.826 },
        "firefox_desktop": { "pct": 0.584, "observable": "css.properties.justify-content" },
        "firefox_fenix": { "pct": 0.743, "observable": "css.properties.justify-content" }
      },
      "recovered": {
        "ha_generic_observable": null
      },
      "composite": {
        "score": 0.787,
        "confidence": "medium",
        "source_count": 4,
        "flags": ["chrome_firefox_divergent", "mobile_divergent"]
      },
      "bcd_key_count": 15,
      "primary_type": "css_property"
    }
  ],
  "cross_validation": [
    {
      "name": "remote-playback",
      "chrome_value": 0.473,
      "chrome_source": "chromestatus",
      "firefox_desktop": 0.006,
      "firefox_fenix": 0.004,
      "delta": 0.467,
      "agreement": "strongly_diverge"
    }
  ]
}
```
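For downstream consumers, the file is easy to rank on; a sketch assuming the shape above:

```javascript
// Rank features by composite score, skipping those without data.
function topFeatures(usage, n = 10) {
  return usage.features
    .filter((f) => f.composite && f.composite.score != null)
    .sort((a, b) => b.composite.score - a.composite.score)
    .slice(0, n)
    .map((f) => `${f.name}: ${(f.composite.score * 100).toFixed(1)}%`);
}

// With the sample feature above:
topFeatures({ features: [{ name: "flexbox", composite: { score: 0.787 } }] });
// → ["flexbox: 78.7%"]
```

In practice the input would come from `JSON.parse(fs.readFileSync("data/web-feature-usage.json", "utf8"))`.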
## Regenerating

To update the data:

- Get fresh `popularityBcdMap.json` from the AreWeBrowserYet collector pipeline
- Run HTTP Archive queries: `npm run collect` (requires BigQuery credentials)
- Collect Firefox data: `npm run collect:firefox` (no credentials needed, downloads ~870MB)
- Reprocess HA results (if re-running without new queries): `npm run collect:reprocess`
- Rebuild outputs: `npm run usage-json && npm run usage-report`