# Experiences Gallery
Community-driven immersive EEG visualizations — lazy-loaded, card-based launcher, simple registry for contributors.
## Built-in Experiences

| Experience | Description |
|---|---|
| Neural Wave Space | Three.js 3D arc of 16 wave strips with amplitude-responsive color, starfield, WebXR + hand tracking |
| Blink Browser | Scroll articles via eye blinks; per-user calibration; frontal electrode monitoring |
| Neural Sonification | Brainwaves → live music; bands mapped to drone, FM pad, lead, harmonics, shimmer; DJ controls |
| VRChat OSC | Stream band powers into VRChat; chatbox + avatar parameter output; live config UI |
| Spoon Bend | Matrix-style telekinesis controlled by focus/beta/gamma; 3D spoon + digital rain |
| Webhook Wizard | Guided 60-second first-webhook setup; live EEG feedback; IFTTT/Zapier templates |
| Eye Track | EOG-based gaze estimation from Fp1/Fp2; polynomial ridge regression with online adaptive learning; save/load models; live algorithm editor |
## Creating an Experience

Time to first playable: ~15 minutes. One `.tsx` file + one line in the registry.
### Create your component

Create `dashboard/src/experiences/my-game/MyGame.tsx`:
```tsx
import { useRef, useEffect } from "react";
import type { ExperienceProps } from "../registry";
import { useFocus, useRelax, useBlink } from "../../hooks/detectors";

export default function MyGame({ eegData, onExit }: ExperienceProps) {
  const { state: focus } = useFocus(eegData);
  const { state: relax } = useRelax(eegData);
  const { state: blink } = useBlink(eegData);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    let raf: number;
    function loop() {
      const f = focus.current.focus; // 0–1
      const r = relax.current.relaxation; // 0–1
      const b = blink.current.blinked; // true for one cycle per blink
      // --- your game logic here ---
      raf = requestAnimationFrame(loop);
    }
    raf = requestAnimationFrame(loop);
    return () => cancelAnimationFrame(raf);
  }, []);

  return (
    <div style={{ position: "fixed", inset: 0, background: "#000" }}>
      <canvas ref={canvasRef} />
      <button onClick={onExit} style={{ position: "absolute", top: 12, left: 12 }}>
        ← Exit
      </button>
    </div>
  );
}
```

### Register in the gallery
Add to `dashboard/src/experiences/registry.ts`:
```ts
const MyGameExperience = lazy(() => import("./my-game/MyGame"));

// Add to EXPERIENCES array:
{
  id: "my-game",
  name: "My Game",
  description: "One-sentence summary.",
  tag: "Focus",
  gradient: ["#ec4899", "#8b5cf6"],
  component: MyGameExperience,
  author: "Your Name",
},
```

The gallery picks it up automatically. Each experience is code-split — no impact on initial load.
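The `ExperienceProps` type and the shape of a registry entry are referenced above but not shown. The sketch below is hypothetical, inferred from the snippets in this doc (`eegData`, `onExit`, the `EXPERIENCES` entry fields); field names beyond those shown are guesses, so check the real `registry.ts`:

```typescript
// Hypothetical registry shapes; not the actual registry.ts contents.
export interface EEGData {
  buffers: { current: Float32Array[] }; // one ring buffer per channel
  writeIndex: { current: number };      // next write position in the ring
  sampleRate: number;                   // e.g. 250 Hz (assumed field)
}

export interface ExperienceProps {
  eegData: EEGData;   // live signal handle passed to every experience
  onExit: () => void; // returns the user to the gallery
}

export interface ExperienceEntry {
  id: string;                 // unique, kebab-case ("my-game")
  name: string;
  description: string;
  tag: string;                // card badge, e.g. "Focus"
  gradient: [string, string]; // card background gradient stops
  // In the real registry this is a React.lazy component; typed loosely here.
  component: (props: ExperienceProps) => unknown;
  author?: string;
}
```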
## Advanced: Eye Track (EOG Gaze Estimation)
The Eye Track experience demonstrates a more advanced pattern — direct ring-buffer signal extraction, multi-phase calibration, and a trained ML model.
### How it works
- EOG extraction — Horizontal gaze ≈ Fp2 − Fp1 (differential), Vertical ≈ (Fp1 + Fp2) / 2 (common-mode). The corneal-retinal dipole (~0.4–1.0 mV) shifts proportionally with gaze angle.
- 5-point calibration — The user fixates on targets (center, up, down, left, right) for 2.5 s each. Mean EOG features are collected per target.
- Polynomial ridge regression — Features `[1, h, v, h², h·v, v²]` are fit via ridge regression (λ = 0.01) using Gaussian elimination. This captures nonlinear eye response at extreme angles.
- Online adaptive learning — During tracking, new (EOG, target) pairs are collected at ~4 Hz and the model refits every 12 samples. Users can pause/resume learning.
- Persistence — The trained model + all samples are saved to `localStorage` and can be loaded on the next session.
- Algorithm editor — Users can edit the gaze-estimation function live in a code panel.
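The calibration fit described above can be sketched as follows: a minimal ridge regression over the six polynomial features, solved with Gaussian elimination as the text describes. Type and function names (`Sample`, `fitRidge`, `solve`) are illustrative, not the actual implementation:

```typescript
type Sample = { h: number; v: number; x: number; y: number }; // EOG features, screen target

function polyFeatures(h: number, v: number): number[] {
  return [1, h, v, h * h, h * v, v * v]; // the 6 polynomial features
}

// Fit weights for x and y via normal equations: (XᵀX + λI) w = Xᵀy
function fitRidge(samples: Sample[], lambda = 0.01): { wX: number[]; wY: number[] } {
  const d = 6;
  const A = Array.from({ length: d }, () => new Array(d).fill(0));
  const bX = new Array(d).fill(0);
  const bY = new Array(d).fill(0);
  for (const s of samples) {
    const f = polyFeatures(s.h, s.v);
    for (let i = 0; i < d; i++) {
      bX[i] += f[i] * s.x;
      bY[i] += f[i] * s.y;
      for (let j = 0; j < d; j++) A[i][j] += f[i] * f[j];
    }
  }
  for (let i = 0; i < d; i++) A[i][i] += lambda; // ridge term keeps A invertible
  return { wX: solve(A, bX), wY: solve(A, bY) };
}

// Gaussian elimination with partial pivoting; works on copies of A and b.
function solve(Ain: number[][], bin: number[]): number[] {
  const n = bin.length;
  const A = Ain.map((r) => r.slice());
  const b = bin.slice();
  for (let col = 0; col < n; col++) {
    let piv = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    [A[col], A[piv]] = [A[piv], A[col]];
    [b[col], b[piv]] = [b[piv], b[col]];
    for (let r = col + 1; r < n; r++) {
      const m = A[r][col] / A[col][col];
      for (let c = col; c < n; c++) A[r][c] -= m * A[col][c];
      b[r] -= m * b[col];
    }
  }
  const w = new Array(n).fill(0);
  for (let r = n - 1; r >= 0; r--) {
    let s = b[r];
    for (let c = r + 1; c < n; c++) s -= A[r][c] * w[c];
    w[r] = s / A[r][r];
  }
  return w;
}
```

The small λ barely biases the fit but guards against a singular system when calibration targets are nearly collinear in feature space.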
### Scientific basis
With polynomial regression and continuous learning, the model adapts to individual electrode placement and improves over time.
### Key code patterns
```ts
// Direct ring-buffer read (no hooks)
function readEOGFeatures(eeg: EEGData, windowSamples: number) {
  const fp1 = eeg.buffers.current[0]; // Fp1
  const fp2 = eeg.buffers.current[1]; // Fp2
  const writeIndex = eeg.writeIndex.current; // ring-buffer write cursor
  const bufferSize = fp1.length;
  let sumH = 0;
  let sumV = 0;
  // Slide backwards from writeIndex
  for (let i = 0; i < windowSamples; i++) {
    const idx = (writeIndex - windowSamples + i + bufferSize) % bufferSize;
    sumH += fp2[idx] - fp1[idx]; // horizontal (differential)
    sumV += (fp1[idx] + fp2[idx]) * 0.5; // vertical (common-mode)
  }
  return { hEOG: sumH / windowSamples, vEOG: sumV / windowSamples };
}

// Polynomial feature expansion and gaze prediction
const feat = [1, h, v, h * h, h * v, v * v]; // 6 features
let x = 0;
for (let i = 0; i < feat.length; i++) x += feat[i] * model.weightsX[i];
```
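The online adaptive learning described earlier (pairs collected at ~4 Hz, refit every 12 samples) can be sketched as a small accumulator. The class and method names (`OnlineGazeModel`, `observe`) are illustrative, not the actual implementation:

```typescript
// Sketch of the online-learning loop: accumulate (EOG, target) pairs and
// refit once REFIT_EVERY new samples have arrived since the last fit.
const REFIT_EVERY = 12;

interface GazeSample { h: number; v: number; x: number; y: number }

class OnlineGazeModel {
  private samples: GazeSample[] = [];
  private pending = 0;
  refitCount = 0;     // how many times the model has been refit
  learning = true;    // user can pause/resume adaptation

  // Called ~4 Hz while tracking, with the current on-screen target known.
  observe(h: number, v: number, targetX: number, targetY: number) {
    if (!this.learning) return;
    this.samples.push({ h, v, x: targetX, y: targetY });
    if (++this.pending >= REFIT_EVERY) {
      this.refit();
      this.pending = 0;
    }
  }

  private refit() {
    this.refitCount++;
    // Re-run the polynomial ridge fit over all accumulated samples here.
  }
}
```

Refitting over all accumulated samples (rather than only the newest batch) keeps the model stable while it slowly adapts to electrode drift.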