Custom Decoders

Chunk decoders turn raw response bytes into whatever the rest of the pipeline wants to consume. vuer-m3u ships three built-ins and accepts custom ones via registerDecoder.

Built-in decoders

| Extension | Decoder | Returns |
| --- | --- | --- |
| .jsonl | jsonlDecoder | unknown[] — one parsed JSON value per line |
| .vtt | textDecoder | string — UTF-8 decoded; caller parses cues |
| .ts | rawDecoder | ArrayBuffer — pass-through (MPEG-TS goes to hls.js, not the Playlist engine) |
| other | rawDecoder | ArrayBuffer — pass-through fallback |

Dispatch is by file extension — the m3u8 parser notes the extension of the first segment, looks it up, and wires it to the loader.
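As a rough sketch of that lookup (the registry and helper names here are illustrative assumptions, not vuer-m3u internals):

```typescript
// Illustrative sketch of extension-based dispatch; the registry and
// decoderFor names are assumptions, not vuer-m3u internals.
const registry = new Map<string, (raw: ArrayBuffer) => unknown>()

// Look up a decoder by the segment URL's extension, falling back to
// a raw pass-through, mirroring the built-in fallback behavior.
function decoderFor(segmentUrl: string): (raw: ArrayBuffer) => unknown {
  const path = segmentUrl.split('?')[0]
  const ext = path.includes('.') ? path.split('.').pop()! : ''
  return registry.get(ext) ?? ((raw) => raw)
}
```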

SegmentDecoder interface

type SegmentDecoder = (raw: ArrayBuffer) => unknown

A function — synchronous — that takes raw bytes and returns decoded data. Whatever shape the decoder returns is the shape useSegment delivers to your view or lane. Subsequent hooks (useSegmentTrack, useMergedTrack) call an optional normalize step on top of this.
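For instance, a decoder matching this signature for newline-delimited JSON might look like the following sketch (an illustration of the contract, not the library's built-in jsonlDecoder):

```typescript
type SegmentDecoder = (raw: ArrayBuffer) => unknown

// Sketch of a JSONL decoder: decode UTF-8, then parse one JSON value
// per non-empty line. Illustrative only; vuer-m3u ships its own.
const jsonlSketch: SegmentDecoder = (raw) => {
  const text = new TextDecoder('utf-8').decode(raw)
  return text
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
}
```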

Registering globally

import { registerDecoder } from '@vuer-ai/vuer-m3u'
import { decode as decodeMsgpack } from '@msgpack/msgpack'
 
registerDecoder('mpk', (raw) => decodeMsgpack(new Uint8Array(raw)))

After registration, any track whose playlist references .mpk chunks will use this decoder. Registration is process-global — call it once at app bootstrap, before any <TimelineContainer> or view mounts.

Registering per-engine

For decoders that should only apply to one track (e.g. a specialized binary format for a proprietary sensor), pass the decoder directly:

import { Playlist } from '@vuer-ai/vuer-m3u'
 
const playlist = new Playlist({
  url: '/my-track/playlist.m3u8',
  decoder: (raw) => myProprietaryDecoder(raw),
})

Per-engine decoders override the global lookup for that one track.

When you need a normalizer too

If the decoded shape doesn't match the lane/view's expected {ts, data} shape, compose a Normalizer<T> into useMergedTrack:

import { useMergedTrack } from '@vuer-ai/vuer-m3u'
 
const { tracks } = useMergedTrack(engine, clock, {
  normalize: (decoded) => {
    // decoded is whatever your decoder returned
    const samples = decoded as MyFormatRow[]
    return new Map([
      ['data', {
        times: new Float32Array(samples.map((s) => s.timestamp_us / 1e6)),
        values: new Float32Array(samples.flatMap((s) => s.joints)),
        stride: samples[0]?.joints.length ?? 0,
      }],
    ])
  },
})

The decoder handles format (bytes → structured data). The normalizer handles layout (structured data → columnar TrackSamples). Keep them separate — a decoder should not know about stride or Float32Array.
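The layering can be sketched end-to-end; the Row shape and both function names below are hypothetical:

```typescript
// Hypothetical row shape produced by the decoder.
interface Row { timestamp_us: number; joints: number[] }

// Layer 1 (decoder): format concern, bytes to structured rows.
const decodeRows = (raw: ArrayBuffer): Row[] =>
  JSON.parse(new TextDecoder().decode(raw))

// Layer 2 (normalizer): layout concern, rows to columnar samples.
const normalizeRows = (rows: Row[]) => ({
  times: new Float32Array(rows.map((r) => r.timestamp_us / 1e6)),
  values: new Float32Array(rows.flatMap((r) => r.joints)),
  stride: rows[0]?.joints.length ?? 0,
})
```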

Examples

MessagePack

import { decode as decodeMsgpack } from '@msgpack/msgpack'
registerDecoder('mpk', (raw) => decodeMsgpack(new Uint8Array(raw)))

Arrow (streaming format)

import * as arrow from 'apache-arrow'
registerDecoder('arrow', (raw) => {
  const table = arrow.tableFromIPC(new Uint8Array(raw))
  return table.toArray() // array of structured rows
})

Parquet

Parquet browser support is spotty — the typical pattern is to use a WASM decoder (e.g. parquet-wasm). Heavier than JSONL per request, but useful for dense numeric columns.

import { readParquet } from 'parquet-wasm'
registerDecoder('parquet', (raw) => readParquet(new Uint8Array(raw)))

Custom binary layout

// Fixed 24-byte-per-sample format: [ts: f64, data: [f32; 4]]
registerDecoder('bin', (raw) => {
  const view = new DataView(raw)
  const count = raw.byteLength / 24
  const samples: { ts: number; data: number[] }[] = []
  for (let i = 0; i < count; i++) {
    const ts = view.getFloat64(i * 24, true)
    const d = [
      view.getFloat32(i * 24 + 8, true),
      view.getFloat32(i * 24 + 12, true),
      view.getFloat32(i * 24 + 16, true),
      view.getFloat32(i * 24 + 20, true),
    ]
    samples.push({ ts, data: d })
  }
  return samples
})
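When defining a fixed layout like this, a small encoder counterpart makes the decoder easy to round-trip test. Everything below is illustrative and not part of the library:

```typescript
interface Sample { ts: number; data: number[] }

// Encoder counterpart for the 24-byte layout ([ts: f64][data: 4 x f32],
// little-endian). Useful for round-trip testing; not part of vuer-m3u.
function encodeBin(samples: Sample[]): ArrayBuffer {
  const buf = new ArrayBuffer(samples.length * 24)
  const view = new DataView(buf)
  samples.forEach((s, i) => {
    view.setFloat64(i * 24, s.ts, true)
    s.data.forEach((v, j) => view.setFloat32(i * 24 + 8 + j * 4, v, true))
  })
  return buf
}

// Decoder equivalent to the 'bin' decoder registered above.
function decodeBin(raw: ArrayBuffer): Sample[] {
  const view = new DataView(raw)
  const count = raw.byteLength / 24
  const samples: Sample[] = []
  for (let i = 0; i < count; i++) {
    samples.push({
      ts: view.getFloat64(i * 24, true),
      data: [0, 1, 2, 3].map((j) => view.getFloat32(i * 24 + 8 + j * 4, true)),
    })
  }
  return samples
}
```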

Related

  • M3U8 Transport — how chunks arrive at the decoder
  • Dtypes — register a dtype that matches your new chunk format
  • Hooks — layer a normalizer on top of your decoder