Ross Bencina
2009-05-04 17:06:52 UTC
Hi Everyone
I'm looking for feedback on an implementation detail I want to change, and
would appreciate any thoughts you're able to give...
First, a couple of definitions:
"framesPerBuffer": this size of the buffer passed to a PA callback
"host buffer size": the size of buffers passed between PA and the host.
While the list was down, Bjorn and I had a discussion about the interaction
between the framesPerBuffer parameter and the latency parameters to
Pa_OpenStream. The OSX PA version currently works a bit differently from the
way PA works on Windows (the closest comparison is with the ASIO
implementation, since both CoreAudio and ASIO are double-buffered), and I'm
hoping to change the OSX implementation to address that.*
If I understand things correctly, the basic issue is that on OSX, the host
buffer size is never made larger than framesPerBuffer, even if the latency
values are set higher than 2*framesPerBufferDuration.
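To make the scenario concrete, here's a minimal sketch of opening a stream
with a fixed framesPerBuffer and a suggested output latency of more than two
callback buffers' duration. The callback, the helper name and the specific
numbers are placeholders, not anything from the current implementation:

    #include "portaudio.h"

    #define FRAMES_PER_BUFFER 256
    #define SAMPLE_RATE       44100.0

    /* open an output stream with a fixed callback buffer size and a
       suggested latency of four callback buffers, i.e. greater than
       2 * framesPerBuffer duration */
    static PaStream *OpenWithExtraLatency( PaStreamCallback *callback )
    {
        PaStreamParameters out;
        PaStream *stream = NULL;

        out.device = Pa_GetDefaultOutputDevice();
        out.channelCount = 2;
        out.sampleFormat = paFloat32;
        out.suggestedLatency = 4.0 * FRAMES_PER_BUFFER / SAMPLE_RATE;
        out.hostApiSpecificStreamInfo = NULL;

        if( Pa_OpenStream( &stream, NULL /* no input */, &out,
                           SAMPLE_RATE, FRAMES_PER_BUFFER, paClipOff,
                           callback, NULL ) != paNoError )
            return NULL;

        return stream;
    }

As it stands, the OSX host buffers would stay at 256 frames here regardless
of the suggested latency; with the ASIO-style scheme described next they
could grow to N * 256 frames while the callback still receives 256-frame
buffers.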
The way it works with ASIO is for the host buffer to be the largest integer
multiple of framesPerBuffer such that the host buffer duration is less than
or equal to the specified latency, e.g.:
host buffer frames = N * framesPerBuffer
choose the maximum integer N such that: duration(N * framesPerBuffer) <= output latency
(actually sometimes it can be more complicated than that due to restrictions
on ASIO buffer sizes imposed by certain drivers, but that's a separate
matter)...
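In case it helps, here is a rough sketch of that sizing rule, ignoring the
driver-imposed granularity restrictions just mentioned and leaving aside how
the suggested latency value is obtained (the function name is mine, not part
of any existing code):

    /* choose the largest whole multiple of framesPerBuffer whose
       duration does not exceed the suggested output latency */
    static unsigned long ChooseHostBufferFrames( unsigned long framesPerBuffer,
                                                 double sampleRate,
                                                 double suggestedOutputLatencySeconds )
    {
        /* number of frames that fit within the requested latency */
        unsigned long latencyFrames =
            (unsigned long)( suggestedOutputLatencySeconds * sampleRate );

        unsigned long n = latencyFrames / framesPerBuffer; /* max integer N */
        if( n < 1 )
            n = 1; /* never smaller than one callback buffer */

        return n * framesPerBuffer;
    }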
---
*The reason I want to change it is that my app expects a fixed
framesPerBuffer to be passed to its callback, and at the same time to be
able to adjust audio buffering (i.e. latency) by changing PortAudio's latency
parameters to increase stability. I think that's a pretty common use case.
Bjorn made the point that because CoreAudio is a two-buffer system,
increasing the host buffer size will not necessarily resolve stability
problems if an additional layer of buffering is required (which I concede in
some cases it might be). However, at the lower end of buffer sizes (32, 64,
128, 256, 512) it is desirable to provide control over the host buffer
size independently of framesPerBuffer, especially at higher sample rates. For
example, Logic Audio provides I/O buffer settings up to 1024 samples. The
behavior I'm seeing in my app is that it runs fine with 256-sample buffers
at 44100 Hz, but at higher sample rates and with certain external hardware it
needs bigger buffers.
Another thing to consider is that larger host buffer sizes mean more timing
jitter (if you're responding to real-time events in the callback); however,
PA provides buffer timestamps to deal with that. And in any case, if people
don't want timing jitter they can just set the latency as low as the buffer
size (or lower).
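For instance, a callback can place events against the DAC time of the buffer
it is rendering rather than against "now", so the extra jitter from larger
host buffers doesn't show up in the audio. A sketch only; the hard-coded
sample rate and the event triggering are stand-ins for whatever the
application actually does:

    #include "portaudio.h"

    static int MyCallback( const void *input, void *output,
                           unsigned long frameCount,
                           const PaStreamCallbackTimeInfo *timeInfo,
                           PaStreamCallbackFlags statusFlags,
                           void *userData )
    {
        double sampleRate = 44100.0;  /* assumed known to the application */
        PaTime bufferStart = timeInfo->outputBufferDacTime;
        unsigned long i;

        (void) input; (void) output; (void) statusFlags; (void) userData;

        for( i = 0; i < frameCount; ++i )
        {
            /* the time this particular frame will reach the DAC */
            PaTime frameTime = bufferStart + i / sampleRate;

            /* ...render frame i, triggering any events scheduled
               at or before frameTime... */
            (void) frameTime;
        }

        return paContinue;
    }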
In conclusion, I would like to investigate what will be involved in changing
the OSX implementation to implement the host buffer frames = N *
framesPerBuffer scheme I described above. Perhaps there are some problems
I'm not seeing? In any case, I'm keen to get any feedback I can before I
put much more time into this...
Thanks!
Ross.