Vision Lab — VR
Point at a glyph in your headset, pull the trigger, and a vision-language model running on-device describes what's in front of you. Compatible candidates orbit an anchored star in your view, sized and tilted by their real classifier-supplied orbital parameters. Frames never leave the browser.
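The orbit layout above can be sketched as a small pure function. The parameter names (`semiMajor`, `eccentricity`, `inclination`, `phase`) are illustrative assumptions; the classifier's actual output schema isn't shown in this README.

```javascript
// Sketch: place a candidate on its orbit around the anchored star.
// Parameter names are hypothetical stand-ins for the classifier's
// orbital parameters; units are radians and arbitrary scene lengths.
function orbitPosition({ semiMajor, eccentricity = 0, inclination = 0, phase = 0 }) {
  // Ellipse in the orbital plane, star at one focus.
  const r = (semiMajor * (1 - eccentricity ** 2)) / (1 + eccentricity * Math.cos(phase));
  const x = r * Math.cos(phase);
  const z = r * Math.sin(phase);
  // Tilt the orbital plane by the inclination about the x-axis.
  return { x, y: z * Math.sin(inclination), z: z * Math.cos(inclination) };
}
```

Each candidate's size and tilt would then be driven by the same classifier-supplied parameters, evaluated per frame with an advancing `phase`.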
- WebXR (`immersive-vr`)
- WebGPU
- Origin Private File System (cache)
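The three capabilities listed above can be feature-detected before showing any UI. A minimal sketch, with the `navigator` object injected so the check is testable outside a browser; in the page you would pass the global `navigator`:

```javascript
// Sketch: report whether the browser supports everything the lab needs.
// `nav` is the (injected) navigator object.
async function checkSupport(nav) {
  const report = {
    // WebXR with an immersive-vr session.
    webxr: !!nav.xr && (await nav.xr.isSessionSupported?.("immersive-vr")) === true,
    // WebGPU for on-device inference.
    webgpu: !!nav.gpu,
    // Origin Private File System for the model cache.
    opfs: typeof nav.storage?.getDirectory === "function",
  };
  report.ok = report.webxr && report.webgpu && report.opfs;
  return report;
}
```

A page would typically gate the Begin button on `(await checkSupport(navigator)).ok`.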
Press Begin to download the model. Subsequent visits load in seconds.
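The fast repeat visits come from caching the downloaded weights in the Origin Private File System. A sketch under stated assumptions: the file name and the idea of passing in the OPFS root and a download callback are illustrative, not the app's actual layout.

```javascript
// Sketch: return cached model bytes from OPFS, or download once and persist.
// `root` is an OPFS directory handle, `download` fetches the weights.
// (A real implementation would distinguish NotFoundError from other failures.)
async function loadModelBytes(root, download, name = "model.bin") {
  try {
    // Cache hit: read the previously stored weights.
    const handle = await root.getFileHandle(name);
    const file = await handle.getFile();
    return new Uint8Array(await file.arrayBuffer());
  } catch {
    // Cache miss: download once, persist for the next visit, and return.
    const bytes = await download();
    const handle = await root.getFileHandle(name, { create: true });
    const writable = await handle.createWritable();
    await writable.write(bytes);
    await writable.close();
    return bytes;
  }
}
```

In the page this would be called with `await navigator.storage.getDirectory()` as the root and a `fetch` of the model URL as the download callback.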
"Find candidates" routes through the Meridian MCP — no PAT, no setup. Llama-3.3-70B authors the candidates and the orbital classifier ranks them server-side.
No bundler — modules load from esm.sh on first hit and are then cached by the browser. Open the page in the Quest browser, complete the model download, then press the VR button.
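The bundler-free setup amounts to importing dependencies by URL from a module script. The package and version below are illustrative examples, not the app's actual dependency list:

```html
<!-- Sketch: no-bundler loading straight from esm.sh.
     "three@0.160.0" is a hypothetical example dependency. -->
<script type="module">
  import * as THREE from "https://esm.sh/three@0.160.0";
  // Modules resolved this way go through the normal HTTP cache,
  // so later visits skip the network round-trip.
</script>
```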