Hooks

Live Playground

Play with the timeline and observe how each hook updates in real time. Notice the different update frequencies for useClockValue at 4, 10, and 30 fps.

import {
  useTimeline,
  ClockProvider,
  TimelineController,
  usePlaylist,
  useSegment,
  useClockValue,
} from '@vuer-ai/vuer-m3u';

function ClockDisplay({ fps }) {
  const time = useClockValue(fps);
  return <div>{fps}fps · {time.toFixed(2)}s</div>;
}

function Playground() {
  const { clock, state, play, pause, seek, setPlaybackRate, setLoop } = useTimeline();
  const { engine, playlist } = usePlaylist({ url: '/annotations.m3u8' });
  const { data, segment, loading } = useSegment(engine);

  return (
    <ClockProvider clock={clock}>
      <TimelineController state={state} onPlay={play} onPause={pause} onSeek={seek}
        onSpeedChange={setPlaybackRate} onLoopChange={setLoop} />
      <ClockDisplay fps={4} />
      <ClockDisplay fps={10} />
      <ClockDisplay fps={30} />
      <div>segment #{segment?.index ?? '—'} · {(data?.length ?? 0)} entries</div>
    </ClockProvider>
  );
}

The 4-layer hook model

Four hooks, each with one job:

#   Hook             What it gives you
1   useSegment       Raw decoded payload of the current segment (format-agnostic)
2   useSegmentTrack  Current segment → columnar {times, values, stride} for fast lookup
3   useMergedTrack   Current segment + contiguous neighbors merged into one columnar track
4   useTrackSample   Given a merged track, return the sample at time (supports lerp / step / nearest / slerpQuat / custom)
useSegment                        ← raw decoded current segment
    ├─ useSegmentTrack            ← normalize current segment → columnar tracks
    └─ useMergedTrack             ← normalize + merge across contiguous segments
           └─ useTrackSample      ← sample a track at a precise time

Which to pick

  • Discrete events per segment (action labels, VTT cues) → useSegment. Stop here.
  • Fast lookup inside one segment only (current chunk inspector, custom merge logic) → useSegmentTrack.
  • Smooth scrubbing with interpolation across chunk boundaries (IMU, joints, pose — the common case) → useMergedTrack + useTrackSample.
  • Custom fallback at a time (step / nearest / no interpolator) → still useTrackSample; choose the interpolator to match your data.

Clock resolution: ClockProvider + useClockContext

Every consumer in this module accepts an optional clock argument. When omitted, the hook or view reads the clock from the nearest <ClockProvider>. If neither is available, the call throws so the misconfiguration is caught at render.
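The resolution order can be sketched as a pure function (names here are hypothetical — the real hook reads the clock from React context):

```typescript
// Hypothetical sketch of the order useClockContext resolves a clock:
// explicit argument → nearest provider → throw at render time.
interface TimelineClockLike {
  time: number;
}

function resolveClock(
  explicit: TimelineClockLike | null | undefined,
  fromProvider: TimelineClockLike | null,
): TimelineClockLike {
  if (explicit) return explicit;       // explicit clock={…} prop wins
  if (fromProvider) return fromProvider; // else nearest <ClockProvider>
  throw new Error('No clock: pass one explicitly or wrap in <ClockProvider>');
}
```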

import { useTimeline, ClockProvider, ActionLabelView, JointAngleView } from '@vuer-ai/vuer-m3u';
 
function App() {
  const { clock } = useTimeline();
  return (
    <ClockProvider clock={clock}>
      <ActionLabelView src="/actions.m3u8" />
      <JointAngleView src="/joints.m3u8" />
    </ClockProvider>
  );
}

Pass clock={…} explicitly to override (e.g., a preview timeline alongside the main one).

function ClockProvider(props: { clock: TimelineClock; children: ReactNode }): JSX.Element
function useClockContext(explicit?: TimelineClock | null): TimelineClock

useTimeline

Creates a TimelineClock and returns discrete state that re-renders consumers only on seek events — not on every frame.

function useTimeline(
  duration?: number,
  externalClock?: TimelineClock | null,
): {
  clock: TimelineClock;
  state: TimelineState;
  play: () => void;
  pause: () => void;
  seek: (t: number) => void;
  setPlaybackRate: (r: number) => void;
  setLoop: (v: boolean) => void;
}

Pass externalClock to adopt an existing TimelineClock instead of creating a new one — useful when <TimelineContainer> and <TrackerContainer> need to share the same clock. Ownership is locked at the first render: the hook only calls clock.destroy() on the clocks it created itself.

TimelineState:

Field         Type     Description
duration      number   Total duration in seconds
playing       boolean  Playback state
playbackRate  number   Speed (1 = normal, 2 = 2x)
loop          boolean  Loop enabled

state does not contain currentTime. Use useClockValue(fps) for time.


useClockValue

Returns clock.time throttled to N frames per second.

function useClockValue(fps: number, clock?: TimelineClock | null): number
Parameter  Type                  Description
fps        number                Update frequency (e.g. 4, 10, 30)
clock      TimelineClock | null  Optional — falls back to <ClockProvider>

Updates immediately on seek events regardless of fps.

const time = useClockValue(30);  // scrubber position
const time = useClockValue(10);  // segment boundary check
const time = useClockValue(4);   // highlight active entry
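A minimal sketch of the throttling idea — not the library's implementation — is a gate that emits only when 1/fps seconds have elapsed, except on seeks:

```typescript
// Sketch of useClockValue's throttling behavior (assumed, not the real
// internals): emit a new value at most every 1/fps seconds of clock
// time, but always emit immediately when a seek occurs.
function makeFpsGate(fps: number) {
  const interval = 1 / fps;
  let last = -Infinity;
  return (now: number, isSeek = false): boolean => {
    if (isSeek || now - last >= interval) {
      last = now;
      return true; // caller should publish the new time
    }
    return false;  // throttled — skip this tick
  };
}
```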

usePlaylist

Creates a Playlist, fetches and parses the m3u8 playlist, and syncs duration to a clock.

function usePlaylist(
  options: PlaylistOptions,
  clock?: TimelineClock | null,
): {
  engine: Playlist | null;
  playlist: ParsedPlaylist | null;
  loading: boolean;
  error: Error | null;
}

PlaylistOptions:

Option         Type            Default                Description
url            string          required               Playlist URL
decoder        SegmentDecoder  auto                   Per-engine decoder function
cacheSize      number          20                     LRU max cached segments
prefetchCount  number          2                      Segments to prefetch ahead
pollInterval   number          targetDuration * 1000  Live poll interval (ms)
fetchFn        typeof fetch    fetch                  Custom fetch function
baseUrl        string          derived from url       Base URL for relative segment paths
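The default baseUrl is presumably the playlist's own directory, as HLS players conventionally resolve relative segment URIs. A sketch of that derivation (the helper name is hypothetical):

```typescript
// Hypothetical sketch: derive a base URL from the playlist URL by
// resolving "." against it, i.e. the playlist's directory.
function deriveBaseUrl(playlistUrl: string): string {
  return new URL('.', playlistUrl).toString();
}
```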

Calls clock.extendDuration() on init and on every live update.


useSegment

Layer 1 — the primitive. Returns whatever the decoder produced for the currently-active segment. Format-agnostic.

function useSegment<T = unknown>(
  engine: Playlist | null,
  clock?: TimelineClock | null,
): SegmentState<T>

When to use

  • Each segment is a self-contained unit (event list, VTT text, log chunk).
  • You don't need ordered / columnar access.
  • The view renders whatever is in the current segment and re-renders when the segment changes.

When not to use

  • You need fast per-sample lookup at arbitrary times — layer up to useSegmentTrack (one chunk) or useMergedTrack (cross-chunk).

SegmentState<T>:

Field    Type                    Description
data     T | null                Decoded segment data
segment  PlaylistSegment | null  Active segment metadata
loading  boolean                 Loading state
error    Error | null            Error state

Tracks segment boundaries locally at ~10fps. Multiple useSegment hooks with different engines on the same clock work correctly — each tracks its own playlist's boundaries independently.


useSegmentTrack

Layer 2 — one segment, normalized. Takes the current segment's decoded payload and converts it into columnar {times, values, stride} tracks — ordered and binary-searchable — without merging across segment boundaries.

function useSegmentTrack<T = unknown>(
  engine: Playlist | null,
  clock?: TimelineClock | null,
  options?: SegmentTrackOptions<T>,
): SegmentTrackState

SegmentTrackOptions<T>:

Option     Type           Default            Description
normalize  Normalizer<T>  samplesNormalizer  Convert decoded segment → Map<string, TrackSamples>

SegmentTrackState:

Field    Type                       Description
tracks   Map<string, TrackSamples>  Columnar tracks for the current segment
segment  PlaylistSegment | null     Which segment produced the tracks
loading  boolean                    Loading state
error    Error | null               Error state

When to use

  • You want ordered samples for binary-search lookup inside the current chunk only.
  • You're implementing a custom cross-segment merge strategy and want the per-segment building block.
  • Memory matters — one segment's worth of data instead of the ~5-segment window merged by useMergedTrack.

When not to use

  • You need interpolation that spans chunk boundaries — jump to useMergedTrack.
  • The source is discrete events with no meaningful times array — stay on useSegment.
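"Binary-searchable" means an ordered times array admits O(log n) bracket lookup. A minimal sketch of such a lookup (the helper name is hypothetical, not a library export):

```typescript
// Hypothetical helper: index of the last keyframe at or before `t`
// in a sorted times array. Returns -1 when t precedes all samples.
function lowerIndex(times: Float32Array, t: number): number {
  let lo = 0;
  let hi = times.length - 1;
  let ans = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (times[mid] <= t) {
      ans = mid;      // candidate bracket start
      lo = mid + 1;   // look for a later one
    } else {
      hi = mid - 1;
    }
  }
  return ans;
}
```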

useMergedTrack

Layer 3 — current segment + neighbors, merged. Fetches a window of segments around the current playback position, normalizes each one, and concatenates contiguous chunks into single Float32Arrays. This is what powers smooth interpolation across chunk boundaries.

function useMergedTrack<T = unknown>(
  engine: Playlist | null,
  clock?: TimelineClock | null,
  options?: MergedTrackOptions<T>,
): MergedTrackState

Returns multi-track data

The return type is Map<string, TrackSamples> — one entry per named channel. The default samplesNormalizer emits a single track named "data"; a custom normalizer may emit several (see PoseView).

{
  tracks: Map<string, TrackSamples>;   // channel name → merged columnar data
  loadedSegments: Set<number>;
  mergedRange: [number, number] | null;
  loading: boolean;
}

Why multi-track?

Multiple channels sharing one m3u8 has concrete payoff:

  • Per-channel interpolators. useTrackSample picks the interpolator per track. A pose stream carries position (lerp-compatible) and quaternion (slerpQuat required) side by side; splitting them into two tracks is the only correct way. Packing into one fat stride would force one interpolator for everything.
  • One engine, one prefetch window. Both channels come from the same m3u8, so one Playlist cache, one prefetch decision, and one set of merged segment boundaries covers them. Splitting into two playlists would double network requests and force manual timeline alignment.
  • Shared times array. When a normalizer emits multiple tracks from the same JSONL rows (like PoseView), they can reuse the same Float32Array of timestamps — only per-channel values buffers are separate.
  • Zero cost when unused. The default normalizer produces one track; reaching for the "data" key via tracks.get('data') is one extra line over a direct return.
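For illustration, a custom normalizer over a hypothetical pose JSONL row shape (the row fields here are assumptions, not the library's actual format) can emit two channels that share one times buffer:

```typescript
interface TrackSamples {
  times: Float32Array;
  values: Float32Array;
  stride: number;
}

// Hypothetical row shape: { t, pos: [x, y, z], quat: [x, y, z, w] }.
type PoseRow = {
  t: number;
  pos: [number, number, number];
  quat: [number, number, number, number];
};

// Emits two named channels; the times Float32Array is shared, only
// the per-channel values buffers are separate.
function poseNormalizer(rows: PoseRow[]): Map<string, TrackSamples> {
  const n = rows.length;
  const times = new Float32Array(n);
  const pos = new Float32Array(n * 3);
  const quat = new Float32Array(n * 4);
  rows.forEach((row, i) => {
    times[i] = row.t;
    pos.set(row.pos, i * 3);
    quat.set(row.quat, i * 4);
  });
  return new Map([
    ['position', { times, values: pos, stride: 3 }],
    ['orientation', { times, values: quat, stride: 4 }],
  ]);
}
```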

When to use

  • Continuous numeric time-series that needs smooth interpolation (sensor, state, trajectory).
  • You need a custom normalize that splits one stream into multiple named channels.
  • You need merge metadata — loading, loadedSegments, mergedRange.

When not to use

  • Single chunk is enough — stay on useSegmentTrack (lighter, no prefetch).
  • Discrete events with ts/te extents — use useSegment.
  • Data shape varies per chunk (different stride) — each chunk must normalize to the same track names and stride.

MergedTrackOptions<T>:

Option     Type           Default            Description
normalize  Normalizer<T>  samplesNormalizer  Convert decoded segment → Map<string, TrackSamples>

MergedTrackState:

Field           Type                       Description
tracks          Map<string, TrackSamples>  Merged track data per named track
loadedSegments  Set<number>                Segment indices loaded
mergedRange     [number, number] | null    Contiguous segment index range
loading         boolean                    Loading state

TrackSamples:

Field   Type          Description
times   Float32Array  Keyframe timestamps
values  Float32Array  Interleaved values (length = times.length * stride)
stride  number        Values per sample

Gap safety: only the longest contiguous chain of segments around the current playback position is merged. This prevents interpolation across a missing chunk.
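The gap-safety rule amounts to keeping only the contiguous run of segment indices around the current one. A sketch of that selection (helper name hypothetical):

```typescript
// Hypothetical sketch of gap safety: from the set of loaded segment
// indices, keep only the contiguous run containing `current`, so no
// interpolation ever spans a missing chunk.
function contiguousRange(
  loaded: Set<number>,
  current: number,
): [number, number] | null {
  if (!loaded.has(current)) return null;
  let start = current;
  let end = current;
  while (loaded.has(start - 1)) start--; // grow left until a gap
  while (loaded.has(end + 1)) end++;     // grow right until a gap
  return [start, end];
}
```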

Reference stability: normalize is stored in a useRef updated each render, so passing a fresh function every render does not trigger effect re-runs.


useTrackSample

Layer 4 — sample a track at time. Returns a Float32Array of length track.stride via a pluggable interpolator. Returns null when the track is absent or has fewer than 2 samples.

function useTrackSample(
  track: TrackSamples | undefined,
  time: number,
  interp?: Interpolator,     // default: lerp
): Float32Array | null

When to use

  • You have a TrackSamples (from useMergedTrack or useSegmentTrack) and want the value at time.
  • You need a specific interpolator — slerpQuat for orientations, step or nearest for categorical / non-numeric-interpolation data.
  • Building a view that reads time from useClockValue and displays numeric readouts / bars / plots.

When not to use

  • As the "main" data hook — it does not subscribe to the clock. Pair with useClockValue (or clock.on('tick') for 60fps Canvas).
  • Inside an imperative render loop — use the pure helper sampleTrack instead (see below).

The returned Float32Array is a buffer reused across renders — read it immediately and copy its contents if you need to keep a value; do not retain a reference to it.

Shipped interpolators:

Interpolator  Shape                                 Behavior
lerp          scalar, vec2/3/4, vecN                Per-component linear interpolation
step          any                                   Hold previous sample (out = a)
nearest       any                                   Pick closer endpoint (alpha < 0.5 ? a : b)
slerpQuat     quaternion [x, y, z, w] (stride = 4)  Spherical linear, shortest-arc

You can also write your own — an Interpolator is any (a, b, alpha, out) => void that writes a length-stride result into out.

Example — interpolate position + orientation at 30fps:

import { useClockValue, useMergedTrack, useTrackSample, slerpQuat } from '@vuer-ai/vuer-m3u';
 
const { tracks } = useMergedTrack(engine);
const time = useClockValue(30);
 
const position = useTrackSample(tracks.get('position'), time);
const orientation = useTrackSample(tracks.get('orientation'), time, slerpQuat);

Imperative loops — sampleTrack

For Canvas rendering at 60fps inside clock.on('tick'), calling hooks is wrong. Use the pure helper:

import { sampleTrack, lerp } from '@vuer-ai/vuer-m3u';
 
const hint = { value: 0 };
const out = new Float32Array(3);
 
clock.on('tick', () => {
  sampleTrack(track, clock.time, lerp, hint, out);
  // draw using out[0], out[1], out[2]
});

Same bracket-search + interpolation, zero React involvement.
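Under stated assumptions (the TrackSamples shape documented above, at least two samples, no hint caching), that bracket-search + interpolation amounts to roughly this sketch — a simplification, not the library's sampleTrack:

```typescript
interface TrackSamples {
  times: Float32Array;
  values: Float32Array;
  stride: number;
}

// Simplified sketch of sampleTrack's core (assumes times.length >= 2;
// the real helper binary-searches and caches a hint — this scans
// linearly for clarity and clamps alpha at the track's ends).
function sampleTrackSketch(
  track: TrackSamples,
  time: number,
  interp: (a: Float32Array, b: Float32Array, alpha: number, out: Float32Array) => void,
  out: Float32Array,
): Float32Array {
  const { times, values, stride } = track;
  let i = 0;
  while (i < times.length - 2 && times[i + 1] <= time) i++;
  const t0 = times[i];
  const t1 = times[i + 1];
  const alpha = Math.min(1, Math.max(0, (time - t0) / (t1 - t0)));
  const a = values.subarray(i * stride, (i + 1) * stride);
  const b = values.subarray((i + 1) * stride, (i + 2) * stride);
  interp(a, b, alpha, out); // write the blended sample into out
  return out;
}
```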

See Custom Views — continuous data for full examples.