# UXET
UX Evaluation Tool — a browser-based framework for running usability tests against same-origin web apps with webcam eye-tracking, interaction capture, heatmap generation, and deterministic offline analysis.
## What it does
- Load a configured same-origin test app into an iframe.
- Calibrate eye-tracking via webcam (powered by WebGazer).
- Record a session — gaze data, mouse movement, clicks, keypresses, and scroll events are all captured while the user completes a task.
- Debrief — when the task is done, either by an app `postMessage` completion signal or a manual stop, UXET renders per-screen gaze heatmaps, v3 ranked findings, element-level issues, data-quality warnings, and interaction stats.
- Export / Import session data as JSON so UXET can re-analyze prior sessions offline, including temporary cohort comparison from multiple imported files or repeated live runs of the same app/task.
## Getting started
UXET is a static site — no build step, no npm install. You just need a local HTTP server.
### Start the server
```sh
./serve.sh
```
This runs:
```sh
python3 -m http.server 8080
```
You can also choose a different port:
```sh
./serve.sh 9000
```
Or via an environment variable:
```sh
UXET_PORT=9000 ./serve.sh
```
Once the server is up, open the URL in your browser, usually `http://127.0.0.1:8080`.
Requires:

- `python3` for the local static server.
- For real eye tracking, a modern browser with webcam permission and network access to the WebGazer/html2canvas CDNs loaded by `index.html`.
## Run a test
- Pick one of the configured apps from the dropdown.
- Click Load App — this launches eye-tracking calibration.
- Follow the calibration prompts (look at each dot and click it).
- When calibration passes, click Start Test to begin recording.
- Complete the task. The session ends automatically when the iframe app sends `UXET_TASK_COMPLETE`, or press Shift+Escape to end it manually.
- Review the debrief screen — heatmaps, timing, click counts, fixation stats, ranked findings, element findings, and confidence warnings.
- Click Test Again to rerun the same app from the debrief screen, or Export Data to download the session as JSON.
When you use Test Again on the same app, UXET keeps the completed live runs in memory and adds comparison insights after the second run. The comparison highlights repeated findings, repeated element-level patterns, and outlier sessions for that app/task until you reset or reload.
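How UXET decides that a session is an outlier is not specified here; one plausible rule, shown only as a sketch, is a z-score cutoff on per-session completion time:

```javascript
// Hypothetical outlier rule: flag sessions whose completion time is
// more than `threshold` standard deviations from the cohort mean.
// UXET's actual criterion is not documented in this README.
function outlierSessions(durationsMs, threshold = 2) {
  const n = durationsMs.length;
  const mean = durationsMs.reduce((a, b) => a + b, 0) / n;
  const sd = Math.sqrt(
    durationsMs.reduce((a, b) => a + (b - mean) ** 2, 0) / n,
  );
  return durationsMs
    .map((d, i) => ({ i, z: sd ? Math.abs(d - mean) / sd : 0 }))
    .filter((s) => s.z > threshold)
    .map((s) => s.i); // indices of flagged sessions
}
```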
## Import a prior session
- Click Import Data.
- Select a UXET JSON export.
- UXET validates the file and renders the same debrief pipeline used for live sessions.
To compare sessions, select multiple UXET JSON exports in the import dialog. UXET analyzes the files in memory, reports repeated findings, element patterns, outlier sessions, and data-quality warnings, then discards the cohort when you reset or reload. Exporting from cohort mode downloads the cohort analysis rather than a single session artifact.
Legacy exports are still supported, but they may lack screenshots, dense mouse traces, or element snapshots. In those cases UXET shows fidelity warnings and limits the analysis accordingly.
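A minimal shape check before import might look like the sketch below; the field names follow the export schema section of this README, but the validation UXET actually performs is an assumption:

```javascript
// Rough pre-check that a parsed JSON file resembles a UXET export.
// schemaVersion "3" exports carry full screenRecords; legacy exports
// may only have summary screens (hence the fidelity warnings above).
function looksLikeUxetExport(data) {
  if (typeof data !== 'object' || data === null) return false;
  const isV3 =
    data.schemaVersion === '3' && Array.isArray(data.screenRecords);
  const isLegacy = Array.isArray(data.screens);
  return isV3 || isLegacy;
}
```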
## Debug mode
Expand Debug Controls on the setup screen to:
- Skip Calibration — bypass eye-tracking calibration entirely.
- Use mouse as gaze — substitute mouse position for eye-tracking (useful for development and demos without a webcam).
- End Test — force-stop a running session.
## Export schema
Session exports use `schemaVersion: "3"` and include:
- full `screenRecords` with screenshots, gaze points, interaction events, and element snapshots
- summary `screens` for compatibility with older UXET exports
- `mouseTrace` sampled during recording
- `fixations`
- `calibration`
- `debug` metadata, including whether calibration was skipped or mouse-derived gaze was used
- `analysisContext`
- recomputed deterministic `analysis` with ranked findings, confidence, screen metrics, and element metrics
Older UXET JSON files can still be imported, but they only support a subset of the richer analysis. UXET recomputes analysis on import rather than trusting stale embedded reports.
## Analytics engine v3
The v3 engine is deterministic and offline. It computes attention friction, interaction friction, data coverage, screen metrics, element metrics, and task-agnostic pre-action exploration metrics, then ranks evidence-bound findings by severity, confidence, recurrence, affected time, and data quality.
UXET does not infer the correct task target from task text. It focuses on observable behavior: time to first action, how many regions or elements were inspected before action, scroll depth before action, repeated clicks, and post-action feedback patterns. Fast first actions suppress speculative friction claims unless there is clear repeated interaction failure.
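As an illustration of one metric named above, time to first action can be computed from an event log; the `{ t, type }` event shape here is an assumption for the example, not UXET's actual record format:

```javascript
// Time to first action: milliseconds from session start to the first
// event the user commits (click or keypress), ignoring passive movement.
function timeToFirstAction(events, sessionStartMs) {
  const ACTION_TYPES = new Set(['click', 'keypress']);
  const first = events.find((e) => ACTION_TYPES.has(e.type));
  return first ? first.t - sessionStartMs : null;
}

// The mousemove at t=1200 is passive; the click at t=3400 is the first
// action, so the metric is 3400 - 1000 = 2400 ms.
timeToFirstAction(
  [{ t: 1200, type: 'mousemove' }, { t: 3400, type: 'click' }],
  1000,
); // 2400
```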
Element-level findings are generated when exports include element snapshots or click fingerprints. Future recordings capture more clickable/card-like elements as areas of interest, including [data-id], [onclick], [tabindex], and pointer-cursor elements. If element snapshots are missing, UXET falls back to spatial zone metrics and shows a confidence warning.
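The selector set above can be sketched as a small query helper; the exact selector list and the pointer-cursor check are assumptions about UXET's internals, not its actual code:

```javascript
// Sketch of area-of-interest (AOI) collection for element-level findings.
// The selector list mirrors the kinds named above ([data-id], [onclick],
// [tabindex], clickable controls); the real set inside UXET may differ.
const AOI_SELECTORS = [
  'a', 'button', 'input', 'select', 'textarea',
  '[data-id]', '[onclick]', '[tabindex]',
];

function collectAoiElements(root) {
  const found = new Set(root.querySelectorAll(AOI_SELECTORS.join(',')));
  // Pointer-cursor elements cannot be matched by a CSS selector alone,
  // so they need a computed-style pass (browser-only).
  for (const el of root.querySelectorAll('*')) {
    if (getComputedStyle(el).cursor === 'pointer') found.add(el);
  }
  return [...found];
}
```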
## Test status
There is currently no committed automated test suite or build step in this repository. Validate changes by running the static server, completing at least one live or mouse-gaze session, importing/exporting JSON, and checking the browser console for runtime errors.
## Adding your own apps
UXET discovers apps dynamically from folders under `testable-apps/`. Each app folder must include:
- `index.html` for the app UI
- `app.json` with the app name and task
Example `testable-apps/your-app/app.json`:
```json
{
  "name": "Your App Name",
  "task": "Describe the task for the user"
}
```
UXET always uses the standardized automatic win condition: `postMessage`. Do not include win-condition metadata in `index.html` or `app.json`. When the task is complete, the app inside the iframe should send:
```javascript
window.parent.postMessage({ type: 'UXET_TASK_COMPLETE' }, '*');
```
This keeps task-specific integration logic inside the app while giving UXET a single completion signal to listen for.
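On the parent side, the completion check could look like the sketch below; `isTaskComplete` and the `stopRecording` hook are hypothetical names, not UXET's actual handler:

```javascript
// Hypothetical parent-side check for the single completion signal.
// Because test apps are same-origin, the origin can be compared strictly.
function isTaskComplete(event, expectedOrigin) {
  if (event.origin !== expectedOrigin) return false;
  return Boolean(event.data) && event.data.type === 'UXET_TASK_COMPLETE';
}

// Wiring (browser-only); stopRecording is a hypothetical hook:
// window.addEventListener('message', (e) => {
//   if (isTaskComplete(e, window.location.origin)) stopRecording();
// });
```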
Because UXET directly reads iframe DOM, scroll, and event data, custom apps must be served from the same origin as UXET.