Would it be helpful to offer APIs that use pre-defined ring buffers to reduce garbage collection and maintain low latency? SharedArrayBuffer (SAB) could also be used for cross-realm/thread processing and browser support is returning.
Additionally, would it be helpful to control the decoder by specifying how many samples/frames to decode per call? We could decode quickly at first for low-latency playback and then gradually increase frame sizes after we have enough decoded data for playback continuity.
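The ramp-up idea above could be sketched as a simple schedule that maps buffered playback time to a decode chunk size. This is purely illustrative: the function name, thresholds, and chunk sizes are assumptions, not part of any proposed API.

```javascript
// Hypothetical ramp: decode small chunks first for low latency,
// then larger chunks once enough audio is buffered for continuity.
// Thresholds and sizes here are illustrative assumptions.
function nextDecodeSize(bufferedSeconds) {
  if (bufferedSeconds < 0.25) return 128;  // prioritize latency at startup
  if (bufferedSeconds < 1.0)  return 1024; // ramp up as the buffer fills
  return 4096;                             // prioritize throughput once stable
}
```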
For example, consider a streaming audio AudioWorklet where GC is reduced using ring buffers and specifying 128 samples to decode synchronously (relates to #19).
audio-worklet-processor.js
// ring buffer of encoded bytes (set by "onmessage" or SAB from main/worker thread)
inputBuffer = new ArrayBuffer(...)

// ring buffers for decoded stereo 2.5s PCM @ 48,000hz
outLeft  = new ArrayBuffer(Float32Array.BYTES_PER_ELEMENT * 48000 * 2.5) // 469K
outRight = new ArrayBuffer(Float32Array.BYTES_PER_ELEMENT * 48000 * 2.5) // 469K

// decoded PCM samples
samplesLeft  = new Float32Array(outLeft)  // 120,000 samples
samplesRight = new Float32Array(outRight) // 120,000 samples

// new stereo decoder (could also be on Worker/main thread via SAB)
decoder = new AudioDecoder({
  srcBuffer: inputBuffer,
  outputBuffers: [outLeft, outRight]
})

// buffer read/write index values
inStart, inEnd, outStart, outEnd

// return values after decode() call
totalSrcBytesUsed, totalSamplesDecoded

// AudioWorkletProcessor.process - processes 128 frames per quantum
process(inputs_NOT_USED, outputs) {
  // specify the max samples to decode (could also be called on Worker/main thread)
  ({totalSrcBytesUsed, totalSamplesDecoded} = decoder.decode({maxToDecode: 128}))

  // update src & output buffers read/write indexes
  ...

  // output decoded [samplesLeft, samplesRight] to @outputs
  ...
}
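The "update src & output buffers read/write indexes" step elided in the sketch amounts to modular index arithmetic over the ring capacities. The following is a minimal sketch, assuming the 120,000-sample per-channel capacity from above; the advance and available helpers are hypothetical, not part of any proposed API.

```javascript
// Hypothetical ring-buffer index bookkeeping. Capacity matches the
// 2.5s @ 48,000hz per-channel buffers above (120,000 samples).
const OUT_CAPACITY = 48000 * 2.5;

// Move an index forward by `count` slots, wrapping at capacity.
function advance(index, count, capacity) {
  return (index + count) % capacity;
}

// Samples currently buffered between the read (start) and write (end) indexes.
function available(start, end, capacity) {
  return (end - start + capacity) % capacity;
}

// After decode() reports totalSamplesDecoded, the writer index moves:
let outEnd = 119950;
outEnd = advance(outEnd, 128, OUT_CAPACITY); // wraps past the end to 78

// After process() copies 128 frames to `outputs`, the reader index moves:
let outStart = 119900;
outStart = advance(outStart, 128, OUT_CAPACITY); // wraps to 28
```

The same arithmetic applies to inStart/inEnd over the encoded-byte ring, with totalSrcBytesUsed advancing the read index.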
anthumchris changed the title from "Granularity for Memory Usage and Decoding Length" to "Memory Reuse and Decode Length" on May 26, 2020.
We've had the BYOB request from WASM folks as well, and we're keen to do something here. Unfortunately, using a SAB for output creates security concerns: the decoder may internally reference a decoded frame for some time while it continues to decode later frames. Apps could manipulate the SAB during this period and cause crashes.
We're involved in a cross team (WASM, WebGPU, ...) discussion for memory re-use / reducing copies. The WebCodecs position is here: WICG/reducing-memory-copies#1
I'll go ahead and close and continue tracking in that repo.