{sysaudio,sysjs}: Implement playback with data_callback #413

Merged
iddev5 merged 4 commits from ay-webaudio-playback into main 2022-07-17 16:48:26 +00:00
iddev5 commented 2022-07-17 13:01:53 +00:00 (Migrated from github.com)

Read individual commit messages for more info

The audio playback implementation itself is quite standard, and processing is done on a per-sample basis. Currently the sample format in the data callback is hard-coded to f32. Even though WebAudio doesn't support any other format, it should (probably) be a []u8 so that users can bitcast it to whatever they want.

The callback itself, along with user_data, is stored on the JS side. JS can technically store opaque pointers, but we don't have any custom function to handle that yet, so for now they are just cast to and from f64; it should be cleaned up someday.
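Storing a wasm pointer as an f64 is lossless here, since a wasm32 pointer is a 32-bit integer and an IEEE-754 double represents every integer up to 2^53 exactly. A minimal sketch of the round trip (all names hypothetical, not the actual sysjs API):

```javascript
// Hypothetical sketch of the f64 pointer cast described above.
// A wasm32 pointer is a u32, and a JS number (f64) represents every
// integer up to 2^53 exactly, so no bits are lost either direction.
function storeCallback(registry, callbackPtr, userDataPtr) {
  // Both pointers survive as plain JS numbers (f64 under the hood).
  registry.push({ cb: callbackPtr, ud: userDataPtr });
  return registry.length - 1; // handle returned to the wasm side
}

function loadCallback(registry, handle) {
  const { cb, ud } = registry[handle];
  // Cast back: the values are still exact u32 integers.
  return [cb >>> 0, ud >>> 0];
}

const registry = [];
const handle = storeCallback(registry, 0xdeadbeef, 0x1000);
console.log(loadCallback(registry, handle)); // [ 3735928559, 4096 ]
```

A proper fix would store the pointers behind an opaque handle type in sysjs instead of relying on the numeric round trip.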

  • By selecting this checkbox, I agree to license my contributions to this project under the license(s) described in the LICENSE file, and I have the right to do so or have received permission to do so by an employer or client I am producing work for whom has this right.
emidoots (Migrated from github.com) approved these changes 2022-07-17 16:48:12 +00:00
emidoots (Migrated from github.com) left a comment

Gonna merge this because it's clearly an improvement over the status quo today. :)

That said, I have some qualms / big concerns I will describe below:

> sysaudio: webaudio: Set internal buffer size to 512
>
> It was 4096 before. Lower values mean lower latency, but higher values are needed for glitch-free audio. It should be properly tested across multiple browsers to find the right default value, or a formula should be devised to auto-calculate it. It should always be a power of 2 in the range 256, 512, ..., 8192, 16384.

This really should be user-configurable via the parameters passed to requestDevice. Auto could be the default option (which for now could default to 512.)

The reason this should be user-configurable is that there will be applications where you e.g. *need* low-latency audio for the application to work / make sense at all, and you'd rather have the user experience glitchy audio than high latency.
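If the buffer size does become an option on requestDevice, the values ScriptProcessorNode accepts are exactly the powers of two from 256 through 16384. A hypothetical helper for resolving such an option (the "Auto" default of 512 is the one suggested above; the function name is made up):

```javascript
// Hypothetical option resolver: snap a requested buffer size up to the
// nearest power of two that ScriptProcessorNode accepts (256..16384).
// 0 stands for "Auto": let the backend pick its default (512 here,
// per the suggestion above).
function resolveBufferSize(requested) {
  if (requested === 0) return 512; // assumed "Auto" default
  let size = 256;
  while (size < requested && size < 16384) size *= 2;
  return size;
}

console.log(resolveBufferSize(0));     // 512
console.log(resolveBufferSize(1000));  // 1024
console.log(resolveBufferSize(99999)); // 16384
```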

> processing is done on a per-sample basis

```zig
pub const DataCallback = fn (..., sample: *f32) void;
```

This is really bad & very expensive. It would get called 44.1k times per second in a typical case, and I believe this design prevents us from handling multi-channel audio in the same callback? We should have this callback take a small buffer of samples instead; it would be *much* more efficient.
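The difference can be sketched with plain arrays standing in for the wasm-side buffer: a buffer-oriented callback is invoked once per processing quantum rather than once per sample (all names here are hypothetical, not the actual sysaudio API):

```javascript
// Sketch: drive the data callback once per 512-frame quantum instead of
// once per sample. `dataCallback` stands in for the wasm export; it
// fills a whole flat buffer of frames * channels samples in one call.
function renderQuantum(dataCallback, frames, channels) {
  const buffer = new Float32Array(frames * channels);
  dataCallback(buffer); // 1 call instead of frames * channels calls
  return buffer;
}

let calls = 0;
const out = renderQuantum((buf) => { calls++; buf.fill(0.5); }, 512, 2);
console.log(calls);      // 1
console.log(out.length); // 1024
```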

Thoughts?

iddev5 commented 2022-07-17 17:35:47 +00:00 (Migrated from github.com)

> This really should be user-configurable

You're right, or maybe it can even be auto-calculated based on some other params; I'm not sure what those would be, though.

> This is really bad & very expensive.

Also agreed. This does not prevent us from doing multi-channel audio, though; we handle that inside the onaudioprocess event. For a single event, it would run `512 * num_channels` times. The solution is to use a Float32Array, which I have done successfully in a different branch, but it needs further cleanup and sysjs support.
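De-interleaving such a Float32Array into WebAudio's per-channel output buffers might look roughly like this, with plain arrays in place of the AudioProcessingEvent so the indexing can be checked outside a browser (the helper name is made up):

```javascript
// Sketch: copy an interleaved Float32Array produced by the wasm-side
// callback into per-channel arrays, as onaudioprocess would via
// event.outputBuffer.getChannelData(c). This performs frames * channels
// assignments per event, matching the 512 * num_channels count above.
function deinterleave(interleaved, frames, channels) {
  const out = [];
  for (let c = 0; c < channels; c++) {
    const ch = new Float32Array(frames);
    for (let i = 0; i < frames; i++) {
      ch[i] = interleaved[i * channels + c];
    }
    out.push(ch);
  }
  return out;
}

// 2 frames of stereo, interleaved L R L R:
const inter = new Float32Array([1, 2, 3, 4]);
const [left, right] = deinterleave(inter, 2, 2);
console.log(Array.from(left));  // [ 1, 3 ]
console.log(Array.from(right)); // [ 2, 4 ]
```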

Reference
hexops/mach!413