All frames of a video rendered simultaneously as stacked semi-transparent planes — treating time as a navigable spatial axis. Orbit the full timeline as a physical object. Gemini AI search lets you query frames with natural language.
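A minimal sketch of the stacking idea: each frame becomes a plane offset along Z, with low opacity so deeper frames show through. All names and parameter values here (`layoutTimeline`, `spacing`, `baseOpacity`) are illustrative, not taken from the actual project.

```javascript
// Sketch: map a video timeline onto the Z axis as translucent planes.
// spacing and baseOpacity are assumed tuning values, not the real ones.
function layoutTimeline(frameCount, spacing = 0.05, baseOpacity = 0.08) {
  const planes = [];
  for (let i = 0; i < frameCount; i++) {
    planes.push({
      frameIndex: i,        // which video frame this plane samples
      z: i * spacing,       // time mapped to depth
      opacity: baseOpacity, // low alpha so the whole stack reads at once
    });
  }
  return planes;
}

const stack = layoutTimeline(240); // e.g. a 10 s clip at 24 fps
```

In the real scene each entry would drive a Three.js plane mesh textured with its frame; the point is simply that frame index becomes a world-space coordinate.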
Three.js · WebXR · GLSL Vertex Shader → Particle Field
A source video and its depth map feed a custom GLSL vertex shader that extrudes a dense particle grid along Z per pixel, producing an orbitable 3D point cloud. A WebXR companion anchors the same depth mesh to real-world surfaces via plane detection and hand tracking on Quest 3.
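A CPU-side sketch of the displacement logic, assuming a grayscale depth buffer where brighter means nearer: each pixel pushes one particle of a regular grid along Z. In the actual project this runs per-vertex in GLSL, sampling the depth texture on the GPU; `displaceGrid` and `depthScale` are hypothetical names for illustration.

```javascript
// Sketch: extrude a width×height particle grid along Z from a depth map.
// depth is a flat array of values in [0, 1]; depthScale is an assumed knob.
function displaceGrid(depth, width, height, depthScale = 2.0) {
  const positions = new Float32Array(width * height * 3);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      positions[i * 3 + 0] = x / (width - 1) - 0.5;  // X in [-0.5, 0.5]
      positions[i * 3 + 1] = 0.5 - y / (height - 1); // Y, flipped to screen-up
      positions[i * 3 + 2] = depth[i] * depthScale;  // depth value → Z extrusion
    }
  }
  return positions;
}

// A 2×2 depth map: near (1.0) at top-left, far (0.0) elsewhere.
const pts = displaceGrid(new Float32Array([1, 0, 0, 0]), 2, 2);
```

The same arithmetic, moved into a vertex shader, is what turns a flat video plane into an orbitable point cloud.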
Reference photos of David Chipperfield's brutalist SSENSE flagship reconstructed into a navigable 3D world via World Labs' Marble model. Outputs .spz from 100k up to full resolution, with collider meshes, explorable in-browser.
A single photograph enters a feedforward neural network and exits as a full 3D Gaussian Splat in under a second. No multi-view capture. No photogrammetry pipeline. Pure inference.