{sysaudio,sysjs}: Implement playback with data_callback #413
Reference: hexops/mach!413
Read the individual commit messages for more info.
The audio playback implementation itself is quite standard, and processing is done on a per-sample basis. Currently the sample format in the data callback is hard-coded to f32. Even though WebAudio doesn't support any other format, it should (probably) be a []u8 so that users can bitcast it however they want.
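For illustration, the same reinterpretation on the JS side would use a typed-array view over the raw bytes, which is the zero-copy analogue of bitcasting a []u8 (a sketch, not the actual sysaudio code; names are illustrative):

```javascript
// A raw byte buffer, standing in for the []u8 the data callback would expose.
const bytes = new Uint8Array(8);

// A Float32Array view over the SAME memory: writing f32 samples through it
// mutates the bytes in place, with no copy, just like a bitcast.
const samples = new Float32Array(bytes.buffer);
samples[0] = 0.5;
samples[1] = -0.25;

console.log(bytes.buffer === samples.buffer); // true: one allocation, two views
```

A user who wants a different sample format would simply construct a different view (Int16Array, etc.) over the same buffer.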
The callback itself, along with user_data, is stored on the JS side. JS can technically store opaque pointers, but we don't have any custom function to handle that yet, so for now we just cast to and from f64; it should be cleaned up someday.
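The f64 round-trip is lossless here because JS numbers are IEEE-754 doubles, which represent every integer up to 2^53 exactly, so a 32-bit wasm address survives the cast unchanged. A minimal sketch (the address is a made-up example, not a real mach pointer):

```javascript
// Hypothetical 32-bit wasm address of the user's callback/user_data.
const ptr = 0xDEADBEE0 >>> 0;

// Stored on the JS side as a plain number (i.e. an f64).
const asF64 = Number(ptr);

// Cast back to a u32 when it's time to invoke the callback.
const back = asF64 >>> 0;

console.log(back === ptr); // true: no precision lost for 32-bit addresses
```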
Gonna merge this because it's clearly an improvement over the status quo today. :)
That said, I have some qualms / big concerns I will describe below:
This really should be user-configurable via the parameters passed to requestDevice. Auto could be the default option (which, for now, could default to 512).
The reason this should be user configurable is because there will be applications where you e.g. need low latency audio for the application to work / make sense at all, and you'd rather have the user experience glitchy audio than high latency.
This is really bad & very expensive. It would get called 44.1k times per second in a typical case, and I believe this design prevents us from handling multi-channel audio in the same callback. We should have this callback take a small buffer of samples instead; it would be much more efficient.
Thoughts?
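The buffer-based callback suggested above could look roughly like this; the function and parameter names are hypothetical, not the actual sysaudio API:

```javascript
// Sketch of a per-buffer design: the user callback is invoked once per block
// and fills a whole interleaved Float32Array, instead of once per sample.
function renderPerBuffer(frames, channels, dataCallback) {
  const buf = new Float32Array(frames * channels);
  dataCallback(buf, channels); // one call fills `frames * channels` samples
  return buf;
}

// Example user callback: a per-frame ramp, duplicated across both channels.
const out = renderPerBuffer(4, 2, (buf, channels) => {
  const frames = buf.length / channels;
  for (let frame = 0; frame < frames; frame++) {
    for (let ch = 0; ch < channels; ch++) {
      buf[frame * channels + ch] = frame / frames;
    }
  }
});
```

With a 512-frame buffer at 44.1 kHz this drops the callback rate from ~44,100 invocations per second to ~86.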
You're right, or maybe it could even be auto-calculated based on some other params; I'm not sure what those would be, though.
Also agreed. This does not prevent us from doing multi-channel audio, though; we handle that inside the onaudioprocess event. For a single event, it would run 512 * num_channels times. The solution is to use a Float32Array, which I have done successfully in a different branch, but that needs further cleanup and sysjs support.
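The multi-channel handling inside a single onaudioprocess-style event can be sketched as below. In the browser the event comes from a ScriptProcessorNode; here outputBuffer is a minimal stand-in so the control flow can be shown in isolation:

```javascript
// Sketch: fill every channel of one audio-process event via the user callback.
// getChannelData returns a Float32Array of `frames` samples, so the callback
// runs once per channel per event rather than once per sample.
function handleAudioProcess(event, dataCallback) {
  const out = event.outputBuffer;
  for (let ch = 0; ch < out.numberOfChannels; ch++) {
    dataCallback(out.getChannelData(ch), ch);
  }
}

// Minimal mock event: 2 channels of 512 frames, matching the default size.
const channelData = [new Float32Array(512), new Float32Array(512)];
const event = {
  outputBuffer: {
    numberOfChannels: 2,
    getChannelData: (ch) => channelData[ch],
  },
};

let calls = 0;
handleAudioProcess(event, (buf) => {
  buf.fill(0.1); // a real callback would write the next block of samples
  calls++;
});
```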