API
Hooks
Live Playground
Play with the timeline and observe how each hook updates in real time. Notice the different update frequencies for useClockValue at 4, 10, and 30 fps.
import {
useTimeline,
ClockProvider,
TimelineController,
usePlaylist,
useSegment,
useClockValue,
} from '@vuer-ai/vuer-m3u';
function ClockDisplay({ fps }) {
const time = useClockValue(fps);
return <div>{fps}fps · {time.toFixed(2)}s</div>;
}
function Playground() {
const { clock, state, play, pause, seek, setPlaybackRate, setLoop } = useTimeline();
const { engine, playlist } = usePlaylist({ url: '/annotations.m3u8' });
const { data, segment, loading } = useSegment(engine);
return (
<ClockProvider clock={clock}>
<TimelineController state={state} onPlay={play} onPause={pause} onSeek={seek}
onSpeedChange={setPlaybackRate} onLoopChange={setLoop} />
<ClockDisplay fps={4} />
<ClockDisplay fps={10} />
<ClockDisplay fps={30} />
<div>segment #{segment?.index ?? '—'} · {(data?.length ?? 0)} entries</div>
</ClockProvider>
);
}
The 4-layer hook model
Four hooks, each with one job:
| # | Hook | What it gives you |
|---|---|---|
| 1 | useSegment | Raw decoded payload of the current segment (format-agnostic) |
| 2 | useSegmentTrack | Current segment → columnar {times, values, stride} for fast lookup |
| 3 | useMergedTrack | Current segment + contiguous neighbors merged into one columnar track |
| 4 | useTrackSample | Given a merged track, return the sample at time (supports lerp / step / nearest / slerpQuat / custom) |
useSegment            ← raw decoded current segment
│
├─ useSegmentTrack    ← normalize current segment → columnar tracks
│
└─ useMergedTrack     ← normalize + merge across contiguous segments
   │
   └─ useTrackSample  ← sample a track at a precise time
Which to pick
- Discrete events per segment (action labels, VTT cues) → useSegment. Stop here.
- Fast lookup inside one segment only (current chunk inspector, custom merge logic) → useSegmentTrack.
- Smooth scrubbing with interpolation across chunk boundaries (IMU, joints, pose — the common case) → useMergedTrack + useTrackSample.
- Non-default sampling at a time (step / nearest / a custom interpolator) → still useTrackSample; choose the interpolator to match your data.
Clock resolution: ClockProvider + useClockContext
Every consumer in this module accepts an optional clock argument. When omitted, the hook or view reads the clock from the nearest <ClockProvider>. If neither is available, the call throws so the misconfiguration is caught at render.
import { useTimeline, ClockProvider, ActionLabelView, JointAngleView } from '@vuer-ai/vuer-m3u';
function App() {
const { clock } = useTimeline();
return (
<ClockProvider clock={clock}>
<ActionLabelView src="/actions.m3u8" />
<JointAngleView src="/joints.m3u8" />
</ClockProvider>
);
}
Pass clock={…} explicitly to override (e.g., a preview timeline alongside the main one).
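A minimal sketch of that override; the PreviewPanel component and the 30-second duration are illustrative, not part of the package:
function PreviewPanel() {
  // A second, independent timeline just for this preview.
  const { clock: previewClock } = useTimeline(30);
  // The explicit clock prop overrides the surrounding <ClockProvider>.
  return <JointAngleView src="/joints.m3u8" clock={previewClock} />;
}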
function ClockProvider(props: { clock: TimelineClock; children: ReactNode }): JSX.Element
function useClockContext(explicit?: TimelineClock | null): TimelineClock
useTimeline
Creates a TimelineClock and returns discrete state that only re-renders on seek events — not every frame.
function useTimeline(
duration?: number,
externalClock?: TimelineClock | null,
): {
clock: TimelineClock;
state: TimelineState;
play: () => void;
pause: () => void;
seek: (t: number) => void;
setPlaybackRate: (r: number) => void;
setLoop: (v: boolean) => void;
}
Pass externalClock to adopt an existing TimelineClock instead of creating a new one — useful when <TimelineContainer> and <TrackerContainer> need to share the same clock. Ownership is locked at the first render: the hook only calls clock.destroy() on the clocks it created itself.
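A minimal sharing sketch, assuming the TimelineClock type is exported from the package; the SharedControls component is illustrative:
import { useTimeline, type TimelineClock } from '@vuer-ai/vuer-m3u';
// Adopts a clock owned by a parent container instead of creating a new one.
function SharedControls({ clock }: { clock: TimelineClock }) {
  // Passed as externalClock, so the hook will not destroy a clock it did not create.
  const { state, play, pause } = useTimeline(undefined, clock);
  return (
    <button onClick={state.playing ? pause : play}>
      {state.playing ? 'Pause' : 'Play'}
    </button>
  );
}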
TimelineState:
| Field | Type | Description |
|---|---|---|
| duration | number | Total duration in seconds |
| playing | boolean | Playback state |
| playbackRate | number | Speed (1 = normal, 2 = 2x) |
| loop | boolean | Loop enabled |
state does not contain currentTime. Use useClockValue(fps) for time.
useClockValue
Returns clock.time throttled to N frames per second.
function useClockValue(fps: number, clock?: TimelineClock | null): number
| Parameter | Type | Description |
|---|---|---|
| fps | number | Update frequency (e.g. 4, 10, 30) |
| clock | TimelineClock \| null | Optional — falls back to <ClockProvider> |
Updates immediately on seek events regardless of fps.
const time = useClockValue(30); // scrubber position
const time = useClockValue(10); // segment boundary check
const time = useClockValue(4); // highlight active entry
usePlaylist
Creates a Playlist, fetches and parses the m3u8 playlist, and syncs duration to a clock.
function usePlaylist(
options: PlaylistOptions,
clock?: TimelineClock | null,
): {
engine: Playlist | null;
playlist: ParsedPlaylist | null;
loading: boolean;
error: Error | null;
}
PlaylistOptions:
| Option | Type | Default | Description |
|---|---|---|---|
| url | string | required | Playlist URL |
| decoder | SegmentDecoder | auto | Per-engine decoder function |
| cacheSize | number | 20 | LRU max cached segments |
| prefetchCount | number | 2 | Segments to prefetch ahead |
| pollInterval | number | targetDuration * 1000 | Live poll interval (ms) |
| fetchFn | typeof fetch | fetch | Custom fetch function |
| baseUrl | string | derived from url | Base URL for relative segment paths |
Calls clock.extendDuration() on init and on every live update.
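A hedged usage sketch; the URL and option values below are placeholders chosen for illustration:
const { engine, playlist, loading, error } = usePlaylist({
  url: '/imu.m3u8',   // placeholder playlist URL
  cacheSize: 40,      // keep more decoded segments in the LRU
  prefetchCount: 4,   // prefetch further ahead for smoother scrubbing
});
// engine feeds useSegment / useSegmentTrack / useMergedTrack;
// playlist holds the parsed m3u8 metadata once loading is false.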
useSegment
Layer 1 — the primitive. Returns whatever the decoder produced for the currently-active segment. Format-agnostic.
function useSegment<T = unknown>(
engine: Playlist | null,
clock?: TimelineClock | null,
): SegmentState<T>
When to use
- Each segment is a self-contained unit (event list, VTT text, log chunk).
- You don't need ordered / columnar access.
- The view renders whatever is in the current segment and re-renders when the segment changes.
When not to use
- You need fast per-sample lookup at arbitrary times — layer up to useSegmentTrack (one chunk) or useMergedTrack (cross-chunk).
SegmentState<T>:
| Field | Type | Description |
|---|---|---|
| data | T \| null | Decoded segment data |
| segment | PlaylistSegment \| null | Active segment metadata |
| loading | boolean | Loading state |
| error | Error \| null | Error state |
Tracks segment boundaries locally at ~10fps. Multiple useSegment hooks with different engines on the same clock work correctly — each tracks its own playlist's boundaries independently.
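A sketch of the discrete-event case. The ActionEvent row shape is hypothetical; only the hook, its fields, and the Playlist type come from the API above:
type ActionEvent = { ts: number; te: number; label: string }; // hypothetical decoded row
function ActionList({ engine }: { engine: Playlist | null }) {
  const { data, loading } = useSegment<ActionEvent[]>(engine);
  if (loading || !data) return <div>loading…</div>;
  return (
    <ul>
      {data.map((e, i) => (
        <li key={i}>{e.label} · {e.ts.toFixed(1)}–{e.te.toFixed(1)}s</li>
      ))}
    </ul>
  );
}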
useSegmentTrack
Layer 2 — one segment, normalized. Takes the current segment's decoded payload and converts it into columnar {times, values, stride} tracks — ordered and binary-searchable — without merging across segment boundaries.
function useSegmentTrack<T = unknown>(
engine: Playlist | null,
clock?: TimelineClock | null,
options?: SegmentTrackOptions<T>,
): SegmentTrackState
SegmentTrackOptions<T>:
| Option | Type | Default | Description |
|---|---|---|---|
| normalize | Normalizer<T> | samplesNormalizer | Convert decoded segment → Map<string, TrackSamples> |
SegmentTrackState:
| Field | Type | Description |
|---|---|---|
| tracks | Map<string, TrackSamples> | Columnar tracks for the current segment |
| segment | PlaylistSegment \| null | Which segment produced the tracks |
| loading | boolean | Loading state |
| error | Error \| null | Error state |
When to use
- You want ordered samples for binary-search lookup inside the current chunk only.
- You're implementing a custom cross-segment merge strategy and want the per-segment building block.
- Memory matters — one segment's worth of data instead of the ~5-segment window merged by useMergedTrack.
When not to use
- You need interpolation that spans chunk boundaries — jump to useMergedTrack.
- The source is discrete events with no meaningful times array — stay on useSegment.
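A per-chunk lookup sketch, pairing the segment-local tracks with useTrackSample and a 10 fps clock. It assumes step is exported alongside lerp and slerpQuat, and uses the 'data' track that the default samplesNormalizer emits:
const { tracks } = useSegmentTrack(engine);
const time = useClockValue(10);
// Hold-previous sampling against the current chunk's samples only.
const value = useTrackSample(tracks.get('data'), time, step);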
useMergedTrack
Layer 3 — current segment + neighbors, merged. Fetches a window of segments around the current playback position, normalizes each one, and concatenates contiguous chunks into single Float32Arrays. This is what powers smooth interpolation across chunk boundaries.
function useMergedTrack<T = unknown>(
engine: Playlist | null,
clock?: TimelineClock | null,
options?: MergedTrackOptions<T>,
): MergedTrackState
Returns multi-track data
The return type is Map<string, TrackSamples> — one entry per named channel. The default samplesNormalizer emits a single track named "data"; a custom normalizer may emit several (see PoseView).
{
tracks: Map<string, TrackSamples>; // channel name → merged columnar data
loadedSegments: Set<number>;
mergedRange: [number, number] | null;
loading: boolean;
}
Why multi-track?
Putting multiple channels in one m3u8 has concrete payoffs:
- Per-channel interpolators. useTrackSample picks the interpolator per track. A pose stream carries position (lerp-compatible) and quaternion (slerpQuat required) side by side; splitting them into two tracks is the only correct way. Packing them into one fat stride would force one interpolator for everything.
- One engine, one prefetch window. Both channels come from the same m3u8, so one Playlist cache, one prefetch decision, and one set of merged segment boundaries covers them. Splitting into two playlists would double network requests and force manual timeline alignment.
- Shared times array. When a normalizer emits multiple tracks from the same JSONL rows (like PoseView), they can reuse the same Float32Array of timestamps — only the per-channel values buffers are separate (the sketch after this list shows the pattern).
- Zero cost when unused. The default normalizer produces one track; reaching for the "data" key via tracks.get('data') is one extra line over a direct return.
When to use
- Continuous numeric time-series that needs smooth interpolation (sensor, state, trajectory).
- You need a custom normalize that splits one stream into multiple named channels.
- You need merge metadata — loading, loadedSegments, mergedRange.
When not to use
- Single chunk is enough — stay on useSegmentTrack (lighter, no prefetch).
- Discrete events with ts/te extents — use useSegment.
- Data shape varies per chunk (different stride) — each chunk must normalize to the same track names and stride.
MergedTrackOptions<T>:
| Option | Type | Default | Description |
|---|---|---|---|
| normalize | Normalizer<T> | samplesNormalizer | Convert decoded segment → Map<string, TrackSamples> |
MergedTrackState:
| Field | Type | Description |
|---|---|---|
| tracks | Map<string, TrackSamples> | Merged track data per named track |
| loadedSegments | Set<number> | Segment indices loaded |
| mergedRange | [number, number] \| null | Contiguous segment index range |
| loading | boolean | Loading state |
TrackSamples:
| Field | Type | Description |
|---|---|---|
| times | Float32Array | Keyframe timestamps |
| values | Float32Array | Interleaved values (length = times.length * stride) |
| stride | number | Values per sample |
Gap safety: only the longest contiguous chain of segments around the current playback position is merged. This prevents interpolation across a missing chunk.
Reference stability: normalize is stored in a useRef updated each render, so passing a fresh function every render does not trigger effect re-runs.
useTrackSample
Layer 4 — sample a track at time. Returns a Float32Array of length track.stride via a pluggable interpolator. Returns null when the track is absent or has fewer than 2 samples.
function useTrackSample(
track: TrackSamples | undefined,
time: number,
interp?: Interpolator, // default: lerp
): Float32Array | null
When to use
- You have a TrackSamples (from useMergedTrack or useSegmentTrack) and want the value at time.
- You need a specific interpolator — slerpQuat for orientations, step or nearest for categorical values or data that should not be interpolated numerically.
- Building a view that reads time from useClockValue and displays numeric readouts / bars / plots.
When not to use
- As the "main" data hook — it does not subscribe to the clock. Pair with
useClockValue(orclock.on('tick')for 60fps Canvas). - Inside an imperative render loop — use the pure helper
sampleTrackinstead (see below).
The output buffer is reused across renders — copy it if you need to keep a value beyond the current render.
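A small sketch of that copy, using the default 'data' track name for illustration:
const sample = useTrackSample(tracks.get('data'), time);
// Float32Array.from allocates a fresh buffer that is safe to store or pass around.
const snapshot = sample ? Float32Array.from(sample) : null;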
Shipped interpolators:
| Interpolator | Shape | Behavior |
|---|---|---|
| lerp | scalar, vec2/3/4, vecN | Per-component linear interpolation |
| step | any | Hold previous sample (out = a) |
| nearest | any | Pick closer endpoint (alpha < 0.5 ? a : b) |
| slerpQuat | quaternion [x, y, z, w] (stride = 4) | Spherical linear, shortest-arc |
You can also write your own — an Interpolator is any (a, b, alpha, out) => void that writes a length-stride result into out.
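For instance, a hedged sketch of a custom interpolator for angle channels stored in radians; the Interpolator type import and the 'heading' track name are assumptions for illustration:
import { useTrackSample, type Interpolator } from '@vuer-ai/vuer-m3u';
// Per-component linear interpolation along the shorter way around the circle.
const lerpAngle: Interpolator = (a, b, alpha, out) => {
  for (let i = 0; i < out.length; i++) {
    let d = b[i] - a[i];
    if (d > Math.PI) d -= 2 * Math.PI;        // wrap so 179° → -179° takes the short way
    else if (d < -Math.PI) d += 2 * Math.PI;
    out[i] = a[i] + d * alpha;
  }
};
const heading = useTrackSample(tracks.get('heading'), time, lerpAngle);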
Example — interpolate position + orientation at 30fps:
import { useClockValue, useMergedTrack, useTrackSample, slerpQuat } from '@vuer-ai/vuer-m3u';
const { tracks } = useMergedTrack(engine);
const time = useClockValue(30);
const position = useTrackSample(tracks.get('position'), time);
const orientation = useTrackSample(tracks.get('orientation'), time, slerpQuat);
Imperative loops — sampleTrack
For Canvas rendering at 60fps inside clock.on('tick'), calling hooks is wrong. Use the pure helper:
import { sampleTrack, lerp } from '@vuer-ai/vuer-m3u';
// Reusable search hint and output buffer avoid per-tick allocation.
const hint = { value: 0 };
const out = new Float32Array(3); // length = track stride
clock.on('tick', () => {
sampleTrack(track, clock.time, lerp, hint, out);
// draw using out[0], out[1], out[2]
});
Same bracket-search + interpolation, zero React involvement.
See Custom Views — continuous data for full examples.