My memory is incomplete, but here are some comments. It might help if you
could clarify why you are asking.
In V19 I refactored your ASIO buffer size adaption algorithm to a common
module (pa_process) to be used wherever it is needed. But obviously it
comes down to individual host APIs whether buffer adaption is needed, or
whether to use it. As you will recall, it's a synchronous algorithm: it
doesn't explicitly use a FIFO, just a minimum-size "left-over-samples"
buffer that carries frames between host buffers.
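The synchronous adaption idea can be sketched roughly as follows. This is an illustrative mono-float sketch, not the actual pa_process code; the names (Adapter, USER_FRAMES, UserCallback) are made up for the example, and the real module handles multiple channels, formats and output as well:

```c
/* Minimal sketch of synchronous buffer-size adaption: the host delivers
 * buffers of arbitrary size, and a fixed-size "left-over" buffer
 * accumulates frames until a full user-sized buffer is ready. */
#include <string.h>

#define USER_FRAMES 64            /* frames per user callback invocation */

typedef void (*UserCallback)(const float *input, unsigned long frames);

typedef struct {
    float leftOver[USER_FRAMES];  /* frames carried between host buffers */
    unsigned long leftOverCount;
    UserCallback callback;
} Adapter;

void Adapter_Init(Adapter *a, UserCallback cb) {
    a->leftOverCount = 0;
    a->callback = cb;
}

/* Called from the host buffer callback with an arbitrary frame count. */
void Adapter_Process(Adapter *a, const float *hostBuffer,
                     unsigned long hostFrames) {
    while (hostFrames > 0) {
        unsigned long need = USER_FRAMES - a->leftOverCount;
        unsigned long n = hostFrames < need ? hostFrames : need;
        memcpy(a->leftOver + a->leftOverCount, hostBuffer,
               n * sizeof(float));
        a->leftOverCount += n;
        hostBuffer += n;
        hostFrames -= n;
        if (a->leftOverCount == USER_FRAMES) { /* full user buffer ready */
            a->callback(a->leftOver, USER_FRAMES);
            a->leftOverCount = 0;
        }
    }
}
```

Because everything happens inside the host callback, no FIFO or extra thread is involved; the only state is the left-over buffer.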
For PA/CoreAudio there is a ring buffer if PA needs to fuse separate
input and output streams into a full duplex stream.
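For reference, the kind of structure involved is a single-reader/single-writer ring buffer. The sketch below is in the spirit of PortAudio's pa_ringbuffer utility but is not the actual implementation; the real one works on arbitrary element sizes and adds memory barriers for cross-thread safety:

```c
/* Minimal single-producer/single-consumer ring buffer. The capacity is
 * a power of two so indices can wrap with a simple mask; read/write
 * indices advance monotonically and are compared by subtraction. */
#define RB_SIZE 1024u  /* must be a power of two */

typedef struct {
    float data[RB_SIZE];
    unsigned long readIndex;   /* advanced only by the consumer */
    unsigned long writeIndex;  /* advanced only by the producer */
} RingBuffer;

unsigned long RB_ReadAvailable(const RingBuffer *rb) {
    return rb->writeIndex - rb->readIndex;
}

/* Returns the number of samples actually written (may be < n if full). */
unsigned long RB_Write(RingBuffer *rb, const float *src, unsigned long n) {
    unsigned long freeSpace = RB_SIZE - RB_ReadAvailable(rb);
    if (n > freeSpace) n = freeSpace;
    for (unsigned long i = 0; i < n; ++i)
        rb->data[(rb->writeIndex + i) & (RB_SIZE - 1)] = src[i];
    rb->writeIndex += n;
    return n;
}

/* Returns the number of samples actually read (may be < n if empty). */
unsigned long RB_Read(RingBuffer *rb, float *dst, unsigned long n) {
    unsigned long avail = RB_ReadAvailable(rb);
    if (n > avail) n = avail;
    for (unsigned long i = 0; i < n; ++i)
        dst[i] = rb->data[(rb->readIndex + i) & (RB_SIZE - 1)];
    rb->readIndex += n;
    return n;
}
```

In the full-duplex case the input callback would write into such a buffer and the output callback would read from it, absorbing the scheduling skew between the two streams.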
There is a whole bunch of information about buffering in PortAudio here:
In particular the "Specific Host API Implementations" section gives some
details for individual host APIs, although the data is incomplete.
Note that, separate from buffer size adaption, there is also the issue of
native IO buffer size selection. That is also discussed at the above link.
Regarding sample rate conversion: In general PA doesn't perform sample
rate conversion itself, but the OS may use SRC behind the scenes to
fulfill PA's sample rate request. In the case of PA/CoreAudio I believe
that PA manually instantiates a system AU that performs the SRC. In
general, for other host APIs, the model is basic: "ask the native
API for a stream with a particular sample rate, return an error if the
native API can't supply it."
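That basic model amounts to something like the following sketch. The function names here are illustrative stand-ins, not PortAudio internals, though paInvalidSampleRate is a real PortAudio error code:

```c
/* Sketch of the "no SRC fallback" sample-rate model: open the native
 * device at the requested rate, or fail with an error. */
typedef enum {
    paNoError = 0,
    paInvalidSampleRate = -9997  /* as in PortAudio's PaErrorCode enum */
} PaErrorCode;

/* Stand-in for a native-API capability query; a real implementation
 * would ask the driver. Here we pretend only 44100/48000 are supported. */
static int NativeApi_SupportsRate(double sampleRate) {
    return sampleRate == 44100.0 || sampleRate == 48000.0;
}

PaErrorCode OpenStreamAtRate(double requestedRate) {
    if (!NativeApi_SupportsRate(requestedRate))
        return paInvalidSampleRate; /* no resampling fallback attempted */
    /* ... configure and start the native stream here ... */
    return paNoError;
}
```

PA/CoreAudio is the exception, since there the AU that PA instantiates can convert; elsewhere the burden of matching the hardware rate falls on the caller.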
Post by Stéphane Letz
What is the current situation in PortAudio regarding buffer size and sample rate adaptation between the real hardware buffer size and SR and the ones used at user code level (i.e. in the user-provided callback)?
Is there any buffer size adaptation code? An intermediate ring buffer or FIFO? What about different SRs? And how are these questions resolved on each supported OS?