Discussion:
New v19 code for Mac OS X
Dominic Mazzoni
2005-03-30 03:23:01 UTC
Permalink
Hello Phil, Ross, Richard, Stephane, Greg, Darren, and everyone else,

For over a year I've been meaning to put some serious time into the v19
version of pa_mac_core, especially as I've spent a lot of time fiddling
with CoreAudio with several different audio devices. Finally, I found
the time, and over the last couple of weeks I rewrote pa_mac_core from
scratch. Of course I borrowed lots of code from previous versions (so,
many thanks to Phil, Darren, Gord, Stephane, Greg, and any others who
may have contributed code), but I decided it would be worth it to start
with pa_skeleton and redesign it from scratch. Among other
differences, the new version uses the PortAudio BufferProcessor, and it
has far more sophisticated code for opening the audio device that
should support everything from built-in devices to high-end
USB/Firewire audio devices quite well.

If anyone wants to try it out, here's the code. More info on how well
it works so far is below.

http://spaghetticode.org/portaudio/2005-03-29/pa_mac_core.c
http://spaghetticode.org/portaudio/2005-03-29/pa_mac_core.h

First of all, here are the tests that work! I tried most of them with
at least a couple of different devices (including different
input/output pairs).

patest_buffer
patest_clip
patest_dither
patest_hang
patest_latency
patest_leftright
patest_longsine
patest_many
patest_maxsines
patest_multi_sine
patest_pink
patest_record
patest_ringmix
patest_saw
patest_sine
patest_sine8
patest_sine_formats
patest_sine_time
patest_start_stop
patest_sync
patest_toomanysines
patest_underflow
patest_wire (one caveat, see below)

Here are the tests that don't work, and the error messages I get. It
appears that perhaps I just have one main bug, relating to what value I
return from IsStreamStopped and IsStreamActive under different
circumstances. I would appreciate it if someone could explain to me in
greater detail how this is supposed to work. I'm clearly
misinterpreting the docs.

patest_callbackstop
An error occured while using the portaudio stream
Error number: -9983
Error message: Stream is stopped

patest_in_overflow
Test failed, no input overflows detected under stress."

patest_out_underflow
Test failed, no output underflows detected under stress.

patest_prime
An error occured while using the portaudio stream
Error number: -9983
Error message: Stream is stopped

patest_read_record
(What is it supposed to do? It exits too quickly...)

patest_stop
An error occured while using the portaudio stream
Error number: -9983
Error message: Stream is stopped
Test of MODE_FINISH failed!

patest_underflow
SleepTime = 91
Try to stop stream.
An error occured while using the portaudio stream
Error number: -9983
Error message: Stream is stopped

patest_write_sine
(It exits too quickly...)

Besides the broken tests above, there are three more things that I am
aware of that need to be fixed:

* Need to handle the case where the Mac device has a different number
of channels than the user requested. For example, some mic inputs only
support 1 channel, and the main output device often only supports 2
channels. I think this would be easier to handle within pa_mac_core
with some interleaving/de-interleaving code, rather than using an
AudioConverter, as v18 did. Unfortunately it's a little bit tricky
because on the Mac, multiple channels can be split across multiple
multi-channel buffers - for example, 4 channels can arrive as 2
buffers, each with 2 channels. Worse, each buffer can either be
interleaved or noninterleaved. It would be nice if BufferProcessor
helped with this, don't you think?

Currently the code runs, but doesn't do the interleaving. So calling
patest_wire (which is mono) on a device that only supports stereo, for
example, results in a distorted, but barely recognizable, sound.

* Blocking I/O still needs to be implemented, I think. I haven't
written any code for ReadStream / WriteStream, etc. - am I supposed to?
If so, how??? I noticed that other supposedly "complete" v19
implementations leave these functions blank, too...

* I should really add listeners for important events, and if the stream
format changes, either adapt to it, or stop the stream.

For those that are interested, I implemented a platform-specific
selector that allows the client program to have far greater control
over what happens when they open an audio device (and when they call
IsFormatSupported).

The issue is that if you change an audio device's settings, this
usually disrupts other programs that are also playing audio. Some
programs want to play nice, and just open the audio device however it's
already set up. That will allow you, for example, to play some sound
effects while iTunes is still playing in the background. Other
programs want to play nice, but they'd strongly prefer opening the
device to not opening it at all, so if it's necessary to modify the
device in order to make it usable, that's okay - otherwise play nice.
Yet other programs want to modify the device if it would get them
better quality (more channels, higher sample rate), but otherwise are
happy to play nice. Finally, some programs actually would prefer to
have exclusive access to the device and kick other programs off for the
duration!

All of these modes of operation, and several shades of gray in-between,
are supported, using the following new API:

typedef enum
{
    paModifyToMatchMinimumReqs   = 1,
    paModifyToMatchLooseReqs     = 2,   /* Default */
    paModifyToMatchStrictReqs    = 4,
    paModifyIfHelpful            = 8,
    paRequireAtLeastNumChannels  = 16,  /* Default */
    paRequireExactNumChannels    = 32,
    paRequireAtLeastSampleRate   = 64,  /* Default */
    paRequireExactSampleRate     = 128,
    paRequireAtLeastSampleFormat = 256,
    paRequireExactSampleFormat   = 512,
    paExclusiveAccess            = 1024
} PaMacCore_DeviceAccessFlags;

void PaMacCore_SetDeviceAccessFlags(PaMacCore_DeviceAccessFlags newFlags);

All of the flags are fully documented in pa_mac_core.h - please check
them out and tell me what you think. I tried hard to come up with a
smaller set of flags that were just as expressive, but couldn't come up
with anything I was happy with.

I wanted to ask about PortMixer, too, but I'll put that in a separate
email, so that we can keep the discussion threads separate.

Regards,
Dominic
Stéphane Letz
2005-03-30 03:49:01 UTC
Permalink
Post by Dominic Mazzoni
Hello Phil, Ross, Richard, Stephane, Greg, Darren, and everyone else,
For over a year I've been meaning to put some serious time into the
v19 version of pa_mac_core, especially as I've spent a lot of time
fiddling with CoreAudio with several different audio devices.
....
Hi Dominic,

Glad to see that you've done this work!

My latest use of PortAudio was to develop a driver needed inside the
jackosx project (http://www.jackosx.com). At that time I was still
using the PortAudio v18 implementation. Because the v18 version had
too many problems (some additional latency, and a specific issue with
the G5), I decided to re-implement everything directly using the
CoreAudio API, and more specifically the AUHAL audio unit. But my new
driver is not general enough, because it does not handle some USB
interfaces that appear as 2 separate audio devices to CoreAudio.

Before I spend time understanding the new code, I would be interested
in answers to the following questions:

- Is the new implementation using the AUHAL audio unit? (It does all
format adaptation and interleaving/de-interleaving, probably in an
optimal way, using AudioConverters internally.)

- Is the new implementation able to handle USB interfaces that appear
as 2 separate audio devices to CoreAudio? Is this still done using an
intermediate ring buffer? What about the possible sample rate drift
problem between the 2 devices?

Thanks

Best Regards

Stephane Letz
Richard Dobson
2005-03-30 05:57:02 UTC
Permalink
I have just built patest_sine with these files, but I still fall at the first
hurdle (even when the internal speakers are the default):

device_count = 0
default_device = -1

It seems like none of the CoreAudio structures is being initialized. I assume I
have a bad Xcode project, but it is not obvious to me what is wrong with it: I
have PA_USE_COREAUDIO and PA_BIGENDIAN defined, for example.

I could really do with either a working configure, makefile, or Xcode project to
get going with - does anyone have such a thing they can post, or post a link to?

Richard Dobson
Post by Dominic Mazzoni
Hello Phil, Ross, Richard, Stephane, Greg, Darren, and everyone else,
For over a year I've been meaning to put some serious time into the
v19 version of pa_mac_core, especially as I've spent a lot of time
fiddling with CoreAudio with several different audio devices.
Finally, I found the time, and over the last couple of weeks I rewrote
pa_mac_core from scratch.
....
Dominic Mazzoni
2005-03-30 12:00:01 UTC
Permalink
Post by Stéphane Letz
Hi Dominic,
Glad to see that you've done this work!
My latest use of PortAudio was to develop a driver needed inside the
jackosx project (http://www.jackosx.com). At that time I was still
using the PortAudio v18 implementation. Because the v18 version had
too many problems (some additional latency, and a specific issue with
the G5), I decided to re-implement everything directly using the
CoreAudio API, and more specifically the AUHAL audio unit. But my new
driver is not general enough, because it does not handle some USB
interfaces that appear as 2 separate audio devices to CoreAudio.
Before I spend time understanding the new code, I would be interested
in answers to the following questions:
- Is the new implementation using the AUHAL audio unit? (It does all
format adaptation and interleaving/de-interleaving, probably in an
optimal way, using AudioConverters internally.)
No, I implemented it using the HAL directly, not AUHAL. The main
reason is that I needed the ability to modify the audio device if
necessary. I think that the AUHAL is a great interface if you always
want to "play nice" and open the device as-is, but it doesn't help at
all if you want to change its settings - but please correct me if I'm
wrong. Note that the device-opening code is 75% or more of the work.
The IOProc and stream starting/stopping is a tiny amount of code in
comparison, and could easily be rewritten to use AUHAL and/or
AudioConverters.

Now that I've slept on it, I'm thinking that perhaps AudioConverters
are the best way to handle the channel issue. I'm pretty sure that
they wouldn't add any latency in this case.
Post by Stéphane Letz
- Is the new implementation able to handle USB interfaces that appear
as 2 separate audio devices to CoreAudio? Is this still done using an
intermediate ring buffer? What about the possible sample rate drift
problem between the 2 devices?
It works by using the BufferProcessor, which is part of pa_common in
PortAudio v19. My understanding is that the BufferProcessor will
automatically handle sample drift by adding additional buffering as
necessary. Perhaps someone else can explain better how it works
internally.

Regards,
Dominic
Post by Stéphane Letz
Thanks
Best Regards
Stephane letz
_______________________________________________
Portaudio mailing list
http://music.columbia.edu/mailman/listinfo/portaudio
Stéphane LETZ
2005-03-30 14:46:01 UTC
Permalink
Post by Dominic Mazzoni
No, I implemented it using the HAL directly, not AUHAL. The main
reason is that I needed the ability to modify the audio device if
necessary. I think that the AUHAL is a great interface if you always
want to "play nice" and open the device as-is, but it doesn't help at
all if you want to change its settings - but please correct me if I'm
wrong.
It is perfectly possible to modify the audio device used by the AUHAL:
sample rate, buffer size... you can actually access the audio device
"wrapped" by the AUHAL just as you would with direct access.

Look at
http://cvs.sourceforge.net/viewcvs.py/jackit/jack/drivers/coreaudio/coreaudio_driver.c?rev=1.17&view=markup
- the coreaudio_driver_new function.
Post by Dominic Mazzoni
Note that the device-opening code is 75% or more of the work. The
IOProc and stream starting/stopping is a tiny amount of code in
comparison, and could easily be rewritten to use AUHAL and/or
AudioConverters.
AUHAL already contains AudioConverters internally, and using
AudioConverters is definitely more efficient than any code we could
write to do the de-interleaving/interleaving ourselves.
Post by Dominic Mazzoni
Now that I've slept on it, I'm thinking that perhaps AudioConverters
are the best way to handle the channel issue. I'm pretty sure that
they wouldn't add any latency in this case.
This is what the AUHAL does....
Post by Stéphane Letz
- Is the new implementation able to handle USB interfaces that appear
as 2 separate audio devices to CoreAudio? Is this still done using an
intermediate ring buffer? What about the possible sample rate drift
problem between the 2 devices?
Post by Dominic Mazzoni
It works by using the BufferProcessor, which is part of pa_common in
PortAudio v19. My understanding is that the BufferProcessor will
automatically handle sample drift by adding additional buffering as
necessary.
I'm not sure about that... I hope Ross can explain.
Using the AUHAL would probably require using 2 AUHALs for a USB
interface that appears as 2 CoreAudio devices.

There is maybe another possibility: Tiger is supposed to handle
"multi-device" interfaces with a new "aggregate" concept. I understood
this would allow handling several real physical interfaces as a unique
CoreAudio device, thus simplifying things. But I'm not sure it will
handle a 2-device USB interface as a unique device... we have to wait.

Stephane
Dominic Mazzoni
2005-03-31 20:52:01 UTC
Permalink
Post by Stéphane LETZ
Post by Dominic Mazzoni
No, I implemented it using the HAL directly, not AUHAL. The main
reason is because I needed the ability to modify the audio device if
necessary. I think that the AUHAL is a great interface if you always
want to "play nice" and open the device as-is, but it doesn't help at
all if you want to change it's settings - but please correct me if
I'm wrong.
It is perfectly possible to modify the audio device used by the AUHAL:
sample rate, buffer size... you can actually access the audio device
"wrapped" by the AUHAL just as you would with direct access.
Look at
http://cvs.sourceforge.net/viewcvs.py/jackit/jack/drivers/coreaudio/
coreaudio_driver.c?rev=1.17&view=markup the
coreaudio_driver_new function.
Post by Dominic Mazzoni
Note that the device-opening code is 75% or more of the work. The
IOProc and stream starting/stopping is a tiny amount of code in
comparison, and could be easily rewritten to use AUHAL, and/or
AudioConverters.
AUHAL already contains AudioConverters internally, and using
AudioConverters is definitely more efficient than any code we could
write to do the de-interleaving/interleaving ourselves.
Post by Dominic Mazzoni
Now that I've slept on it, I'm thinking that perhaps AudioConverters
are the best way to handle the channel issue. I'm pretty sure that
they wouldn't add any latency in this case.
This is what the AUHAL does....
Neat, thanks, I wasn't aware of this before. I guess when you said
AUHAL, at first I thought you meant that you used it _instead_ of the
HAL. But you're actually making lots of HAL calls, you're just using
AUHAL as a wrapper so that it can help with device conversion. This
makes a lot of sense.

Is there any other difference between AUHAL and HAL+AudioConverter? If
they're essentially the same, then using AUHAL makes sense to me; if it
handles the conversion then that's one fewer thing to worry about. I just
want to make sure it's not introducing any latency or anything else...

I'll address the buffering issue in the next email...

- Dominic
Stéphane Letz
2005-04-01 02:59:01 UTC
Permalink
Post by Dominic Mazzoni
Neat, thanks, I wasn't aware of this before. I guess when you said
AUHAL, at first I thought you meant that you used it _instead_ of the
HAL. But you're actually making lots of HAL calls, you're just using
AUHAL as a wrapper so that it can help with device conversion. This
makes a lot of sense.
Yes.
Post by Dominic Mazzoni
Is there any other difference between AUHAL and HAL+AudioConverter?
If they're essentially the same, then using AUHAL makes sense to me;
if it handles the conversion then that's one fewer thing to worry
about. I just want to make sure it's not introducing any latency or
anything else...
AUHAL is surely developed with the HAL and AudioConverters. The only
thing is that it forces an additional buffer copy (I was not able to
avoid it in the given jack coreaudio driver example...) when passing
the data between the application and the AUHAL; this buffer copy can
be avoided by using HAL+AudioConverter directly.

But the benefit of using the AUHAL is probably worth more than this
additional buffer copy problem....

Stephane
Ross Bencina
2005-03-31 08:11:01 UTC
Permalink
Post by Stéphane Letz
- Is the new implementation able to handle USB interfaces that appear
as 2 separate audio devices to CoreAudio? Is this still done using an
intermediate ring buffer? What about the possible sample rate drift
problem between the 2 devices?
Post by Dominic Mazzoni
It works by using the BufferProcessor, which is part of pa_common in
PortAudio v19. My understanding is that the BufferProcessor will
automatically handle sample drift by adding additional buffering as
necessary. Perhaps someone else can explain better how it works
internally.
I'm not sure how you came to that understanding, because the buffer
processor is synchronous with respect to the number of input and output
samples you provide it. No adaptation for sample rate slew is included;
that's up to your code.

Ross.
Dominic Mazzoni
2005-03-31 21:05:01 UTC
Permalink
Post by Ross Bencina
Post by Dominic Mazzoni
Post by Stéphane Letz
- in the new implementation able to handle USB interfaces that appear
as 2 separate Audio devices for CoreAudio. Is is still done using an
intermediate ring buffer? What about possible sample rate drift
problem between the 2 devices?
It works by using the BufferProcessor, which is part of pa_common in
PortAudio v19. My understanding is that the BufferProcessor will
automatically handle sample drift by adding additional buffering as
necessary. Perhaps someone else can explain better how it works
internally.
I'm not sure how you came to that understanding, because the buffer
processor is synchronous with respect to the number of input and output
samples you provide it. No adaptation for sample rate slew is included;
that's up to your code.
Well, here was the part of the documentation that made me think so:

"One of the important capabilities provided by the buffer processor is
the ability to adapt between user and host buffer sizes of different
lengths with minimum latency. Although this task is relatively easy to
perform when the host buffer size is an integer multiple of the user
buffer size, the problem is more complicated when this is not the case -
especially for full-duplex callback streams. Where necessary the adaption
is implemented by internally buffering some input and/or output data."

Based on that, I guess I assumed that if I constructed a BufferProcessor
with paUtilUnknownHostBufferSize, then it would handle arbitrary (or quite
large) amounts of buffering, if necessary.

If I'm wrong, that's fine. I guess I'll have to add the ring buffer code
back. But could you explain to me why the case of drifting is
significantly different from the case of an unknown host buffer size?
Even if the current implementation can't quite handle that, I still think
that the BufferProcessor is a great abstraction and that it could be
modified to handle this case if necessary.

Stephane, it looks like you wrote the original code that BufferProcessor
is based on. Sorry, I didn't realize that earlier. :)

- Dominic
Stéphane Letz
2005-04-01 03:07:03 UTC
Permalink
Post by Dominic Mazzoni
Based on that, I guess I assumed that if I constructed a
BufferProcessor with paUtilUnknownHostBufferSize, then it would handle
arbitrary (or quite large) amounts of buffering, if necessary.
If I'm wrong, that's fine. I guess I'll have to add the ring buffer
code back. But could you explain to me why the case of drifting is
significantly different from the case of an unknown host buffer size?
Even if the current implementation can't quite handle that, I still
think that the BufferProcessor is a great abstraction and that it
could be modified to handle this case if necessary.
Stephane, it looks like you wrote the original code that
BufferProcessor is based on. Sorry, I didn't realize that earlier. :)
I was not aware of this new (for me...) paUtilUnknownHostBufferSize
thing. The original idea of the buffer processor stuff was to
guarantee minimum additional latency when arbitrary host and user
buffer sizes are used. And I don't know if the current buffer
processor code could be extended to handle the drifting issue...
Ross can probably answer this.

Stephane
Ross Bencina
2005-04-02 12:27:01 UTC
Permalink
Post by Stéphane Letz
I was not aware of this new (for me...) paUtilUnknownHostBufferSize thing.
The original idea of the buffer processor stuff was to guarantee minimum
additional latency when arbitrary host and user buffer sizes are used. And I
don't know if the current buffer processor code could be extended to handle
the drifting issue... Ross can probably answer this.
The buffer processor is quite general purpose now (for better or worse) and
handles all of the adaptation scenarios I knew about when it was written,
including both simple and complex ones. But the current code is based on
the assumption that the same number of samples will be read and written at
each call (this is the sense in which it is synchronous), so even with
paUtilUnknownHostBufferSize it assumes that the supplied input and output
are the same length.

In a sense the Buffer Processor already handles the drifting issue, in that
you can tell it that you don't have samples available and it will forward
the necessary information to the client according to which flags were set
when the stream was created. I currently believe that the per-host
implementation is in the best position to make decisions about when samples
need to be dropped due to slippage -- if you are using a circular buffer,
for example, you will need to adjust the circular buffer due to slippage,
and since I can think of at least three cases where this will be totally
different (MME, DirectSound, CoreAudio), I don't think the functionality
belongs in the Buffer Processor.

It's been a while since I thought about this stuff, so if you have any
better ideas please tell me :-)

Best wishes

Ross.
