Dmitry K.
2017-01-18 10:08:15 UTC
Hi Laurent,
Sorry it took so long to respond to your question. WASAPI does not do any frame rate conversion, so in Shared mode your app must send data at the same frame rate as the one reported by the audio device (and resample it if needed). So the attempt to open the device in Shared mode at 22050 Hz will fail if the device reports 48000 Hz. That is correct WASAPI behavior.
Best regards,
Dmitry.
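A minimal sketch of the approach described above, at the PortAudio level (illustrative, not code from this thread): query the rate the device reports, open the shared-mode stream at that rate, and resample the 22050 Hz material in the application before writing it. Error handling is trimmed and the resampler is only indicated in a comment.

/* Open the output stream at the device's reported rate instead of 22050 Hz. */
#include <stdio.h>
#include "portaudio.h"

int main(void)
{
    if (Pa_Initialize() != paNoError)
        return 1;

    PaDeviceIndex dev = Pa_GetDefaultOutputDevice();
    const PaDeviceInfo *info = Pa_GetDeviceInfo(dev);

    /* In WASAPI shared mode this is the mix/engine rate (e.g. 48000 Hz). */
    double deviceRate = info->defaultSampleRate;
    printf("Opening at device rate %.0f Hz\n", deviceRate);

    PaStreamParameters out;
    out.device = dev;
    out.channelCount = 2;
    out.sampleFormat = paFloat32;
    out.suggestedLatency = info->defaultLowOutputLatency;
    out.hostApiSpecificStreamInfo = NULL;

    PaStream *stream = NULL;
    PaError err = Pa_OpenStream(&stream, NULL, &out, deviceRate,
                                paFramesPerBufferUnspecified, paClipOff,
                                NULL /* blocking API */, NULL);
    if (err == paNoError) {
        /* Resample the 22050 Hz buffers to deviceRate here (for example with
           libsamplerate) and feed them with Pa_WriteStream(). */
        Pa_CloseStream(stream);
    }

    Pa_Terminate();
    return err == paNoError ? 0 : 1;
}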
On Wed, 7 Dec, 2016 at 2:32 PM, Laurent Zanoni <***@acapela-group.com> wrote:
To: portaudio list
Hello,
In Windows 10 (UWP), there are a few flags that are supposed to adapt the user format to the one from the driver (AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM, SRC_DEFAULT_QUALITY), as well as the stream option eStreamOptionMatchFormat (for W10).
I'm trying to play 22050 Hz samples (adapted from paex_sine_c++) in WASAPI shared mode, but it fails saying that the format is not supported.

CreateAudioClient calls GetClosestFormat with 22050 as the sample rate, then IAudioClient::IsFormatSupported fills the closest match with 48000.
Then it of course fails to validate the sample rate :/
Isn't WASAPI Shared mode supposed to allow resampling with the proper flags?

Any hint?
BR,
Laurent
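For reference, the negotiation step described above can be reproduced directly against WASAPI. The sketch below (plain C with COM macros, illustrative only; link with ole32 and uuid) asks IAudioClient::IsFormatSupported about a 22050 Hz format in shared mode and prints the closest match it proposes, which is the 48000 Hz mix format on the device in question.

#define COBJMACROS
#include <stdio.h>
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main(void)
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *enumerator = NULL;
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumerator);

    IMMDevice *device = NULL;
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumerator, eRender, eConsole, &device);

    IAudioClient *client = NULL;
    IMMDevice_Activate(device, &IID_IAudioClient, CLSCTX_ALL, NULL, (void **)&client);

    /* The 22050 Hz / 16-bit stereo format the application would like to use. */
    WAVEFORMATEX want = {0};
    want.wFormatTag = WAVE_FORMAT_PCM;
    want.nChannels = 2;
    want.nSamplesPerSec = 22050;
    want.wBitsPerSample = 16;
    want.nBlockAlign = (WORD)(want.nChannels * want.wBitsPerSample / 8);
    want.nAvgBytesPerSec = want.nSamplesPerSec * want.nBlockAlign;

    WAVEFORMATEX *closest = NULL;
    HRESULT hr = IAudioClient_IsFormatSupported(client, AUDCLNT_SHAREMODE_SHARED,
                                                &want, &closest);
    if (hr == S_OK)
        printf("22050 Hz accepted as-is\n");
    else if (hr == S_FALSE && closest)   /* shared mode proposes a closest match */
        printf("closest shared-mode rate: %lu Hz\n",
               (unsigned long)closest->nSamplesPerSec);
    else
        printf("format rejected, hr = 0x%08lx\n", (unsigned long)hr);

    if (closest) CoTaskMemFree(closest);
    IAudioClient_Release(client);
    IMMDevice_Release(device);
    IMMDeviceEnumerator_Release(enumerator);
    CoUninitialize();
    return 0;
}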
----- Original Message -----
From: Nocs ... <***@hotmail.com>
To: portaudio list <***@lists.columbia.edu>; ***@mobileer.com
Sent: Monday, December 05, 2016 7:20 PM
Subject: Re: [Portaudio] A small guidance needed in WMME
Thanks for the response. Yep, I managed to achieve it the blocking way, also using Opus encoding, which seems to make things easier because its compression lets me send the audio in small buffers.
Combining PortAudio with Opus decoding seems like a very well-suited approach for VoIP solutions.
I haven't tested the transmission yet, since I plan to do that in the next few days, but if things don't go well with vector buffers I will have to use the ring buffers you mention. Thanks for the tip about them.
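For reference, the PortAudio-plus-Opus combination mentioned above boils down to something like the sketch below (illustrative only: in a real program the encoder is created once and reused, and the frame size must be one Opus accepts, e.g. 20 ms at 48 kHz).

/* Encode one 20 ms block of 48 kHz mono float samples captured via PortAudio. */
#include <opus/opus.h>

#define SAMPLE_RATE 48000
#define FRAME_SIZE  960          /* 20 ms at 48 kHz */

int encode_block(const float *pcm,        /* FRAME_SIZE mono samples */
                 unsigned char *packet,   /* output buffer for the Opus packet */
                 int maxBytes)
{
    int err = 0;
    OpusEncoder *enc = opus_encoder_create(SAMPLE_RATE, 1, OPUS_APPLICATION_VOIP, &err);
    if (err != OPUS_OK)
        return -1;

    /* Returns the compressed packet length in bytes; that packet is what gets
       sent over the network and passed to opus_decode_float() on the peer. */
    int nbytes = opus_encode_float(enc, pcm, FRAME_SIZE, packet, maxBytes);

    opus_encoder_destroy(enc);
    return nbytes;
}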
From: portaudio-***@lists.columbia.edu <portaudio-***@lists.columbia.edu> on behalf of Phil Burk <***@mobileer.com>
Sent: Monday, December 5, 2016 7:23:02 PM
To: portaudio list
Subject: Re: [Portaudio] A small guidance needed in WMME
If you need low latency then you should probably use the callback API.
Just make sure you don't do much besides simple scaling and routing of signals in the callback.
If you need to send the data over a network, or to disk, then pipe it through a ringbuffer to another thread. That way you can avoid doing any networking in the audio callback.
You may not be able to use a bidirectional stream. In that case you may also need to use a ringbuffer to connect two unidirectional streams.
I think that with a combination of callbacks and ring buffers you can build whatever topology you need.
Phil Burk
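A rough sketch of the callback-plus-ring-buffer pattern described above, using the pa_ringbuffer utility from the PortAudio source tree (the sizes and the sendToPeer hook are placeholders for this illustration). The audio callback only copies samples into the ring buffer; a worker thread drains it and does the networking. The same pattern in the other direction, with the network thread writing and the output callback reading, also covers connecting two unidirectional streams when a full-duplex stream is not available.

#include "portaudio.h"
#include "pa_ringbuffer.h"

#define RING_FRAMES 8192                 /* must be a power of two */

static PaUtilRingBuffer g_ring;
static float g_ringData[RING_FRAMES];

static int recordCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    (void)output; (void)timeInfo; (void)statusFlags; (void)userData;
    /* No locks, no allocation, no network calls here: just copy and return. */
    PaUtil_WriteRingBuffer(&g_ring, input, (ring_buffer_size_t)frameCount);
    return paContinue;
}

/* Runs on a worker thread: pull audio out of the ring buffer and hand it to
   the network/encoder side (sendToPeer is a placeholder). */
void drainLoop(void (*sendToPeer)(const float *samples, long count))
{
    float block[512];
    for (;;) {
        ring_buffer_size_t n = PaUtil_ReadRingBuffer(&g_ring, block, 512);
        if (n > 0)
            sendToPeer(block, (long)n);
        else
            Pa_Sleep(5);                 /* nothing buffered yet */
    }
}

int setupCapture(PaStream **stream)
{
    PaUtil_InitializeRingBuffer(&g_ring, sizeof(float), RING_FRAMES, g_ringData);
    return Pa_OpenDefaultStream(stream, 1 /* mono input */, 0 /* no output */,
                                paFloat32, 48000, 256, recordCallback, NULL);
}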
On Thu, Dec 1, 2016 at 9:31 AM, Nocs ... <***@hotmail.com> wrote:
Hello to all,
I have lost myself in too many examples and tests, and I'm not sure which approach to choose to get a good result without spending time unnecessarily.
I made both a blocking and a non-blocking input-to-output test, but I can't tell which is better; after seeing other tests done in other ways, I don't know whether what I am doing is right for my needs.
What I want to achieve is to capture the input from the microphone, save it to a buffer, and also play that buffer to the output at the same time.
It will be for a p2p chat service using a connection between two PCs, for example, so after testing on the same PC I need to be able to switch the output to play the buffer sent from the other PC's microphone.
Which test, or combination of tests, examples and tutorials, suits my needs?
Thanks in advance for your time.