Wow! My first post! I've been lurking here for quite some time and observing the assembler related discussions. Quite interesting stuff for the most part...

Well, my first question is about something called double buffering. I want my program to take the audio from the microphone and play it to the speakers. I'm using the low level audio functions, so I have to manage my audio data buffers, and keep a constant data buffer flow between the input and output. According to the stuff that I found on the web, I must do double-buffering. The only question is how? I can find no examples.

I tried doing something with threads (attachment). The proggie doesn't work well, uses a lot of processor time and consumes about 5 MB of memory. That's not very nice for an assembly proggie that is supposed to be lean and mean...

Here's the important question: Does anyone have any info or code (ASM, VB, C++, Pascal...) that shows you how to do double buffering the way it's supposed to be done?
Posted on 2002-02-07 23:26:18 by Lysic

just a brief description:


1. write a waveOutProc() procedure that gets called when the sound device finishes playing a buffer; this is indicated by the WOM_DONE message; inside this proc check for WOM_DONE and set an event or post a message (nothing else)

2. open the audio device and allocate two buffers, fill two WAVEHDR structures and call waveOutPrepareHeader() once for each buffer

3. wait for the event to be set (WaitForSingleObject()) or catch the message posted in waveOutProc(), then fill your buffer and call waveOutWrite()

4. call your waveOutProc() two times by yourself

now the playback is running: after one buffer is finished the callback procedure is called, fills the buffer again and plays it, and fills and plays ...
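The four steps above could be sketched like this. This is only a simulation in plain C: the actual Win32 calls (waveOutWrite(), waveOutPrepareHeader(), the real WOM_DONE callback) are replaced by made-up stand-ins such as fill_buffer() and device_play(), so only the buffer-rotation logic is shown.

```c
#include <assert.h>

/* Hedged sketch of the double-buffer playback scheme.  fill_buffer()
   and device_play() are stand-ins for real audio code, NOT real API. */

#define NBUF  2
#define BUFSZ 4

static short buffers[NBUF][BUFSZ];
static int writes = 0;                /* how many buffers were queued */

/* stand-in for filling a buffer with fresh audio data */
static void fill_buffer(short *buf)
{
    for (int i = 0; i < BUFSZ; i++)
        buf[i] = (short)(writes * BUFSZ + i);
}

/* stand-in for waveOutWrite(): queue one buffer on the "device" */
static void device_play(short *buf)
{
    (void)buf;
    writes++;
}

/* step 1: the callback -- on WOM_DONE, refill the finished buffer and
   queue it again (the real callback would only set an event here and
   let another thread do the work) */
static void waveOutProc_done(int which)
{
    fill_buffer(buffers[which]);
    device_play(buffers[which]);
}

/* step 4: "call your waveOutProc() two times by yourself" to prime
   the queue -- from then on the device keeps the cycle going */
static void start_playback(void)
{
    waveOutProc_done(0);
    waveOutProc_done(1);
}
```

Once both buffers are primed, every WOM_DONE hands you back exactly one free buffer while the other is still playing, which is the whole trick.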

Bye Miracle
Posted on 2002-02-08 10:26:01 by miracle
I wouldn't understand this :(

Maybe there is someone who knows too but can explain better.

Bye Miracle
Posted on 2002-02-08 10:30:54 by miracle
Thanks, miracle. I actually know all the functions that I need to call, I just don't know how to maintain the timing that is required: record one buffer while another is playing, then switch buffers and repeat.

Currently, my proggie is continuously checking for the presence of the WHDR_DONE flag in the WAVEHDR structure to see if waveOutWrite or waveInStart are done with the writing/reading. I think I'm gonna switch it to window messages or use a callback... don't quite remember which of the two is the second way of finding out when the APIs are done with the buffer.

I also found some stuff about double buffering for video images. It's similar enough to audio, so maybe I can get some leads...

For those that wanna see what I've done up to this point...

BTW, has anyone noticed that if you write a message, select an attachment, click Preview, and then click Post, the attachment does not show up in the Preview or the final post? That's why my first message has no attachment.
Posted on 2002-02-08 19:02:12 by Lysic
Hmmm... Not sure what's up, but I D/L'd your example, ran it, and it said something like "test ok" (don't remember exactly). I actually missed the message the first time around, so I chose "Start" again in the File menu.

That was when it brought my OS to its knees. Have to say I'm impressed, cause I haven't managed to cripple my OS to that extent in over a year :) (( So I think you might have a "low level" API problem with Windows 98SE and a SoundBlaster Live card. ))

Thought you should know..
Posted on 2002-02-10 01:23:55 by NaN
Thanks for trying it NaN! I hope you heard the clicks and pops...

Sorry for the crash though. I know it uses up 99% of CPU resources, but on my WinXP I always have control. The only instability I noticed occurs when you click enable audio transport under File, and then you decide to close the application with the x button on the titlebar. I think that if you click start audio transport, then end audio transport, then exit, it won't crash. Although I handle the thread termination when you click the x on the titlebar, I think there's not enough time for the threads to quit before I send the program termination signal. A Sleep(1000) under the WM_CLOSE section in WndProc fixes that.

But Dope! I just realized that I also forgot to make sure the whole thing isn't running when you click Start Audio Transport! I guess that's what killed your Windows when you clicked Start the second time around without a stop in between the two Starts.

Thanks for the battle testing, NaN! Never thought of pressing Start 2 times in a row!
Posted on 2002-02-10 14:03:59 by Lysic
Its O.K.

I always manage to do it the hard way :)

Posted on 2002-02-10 18:20:19 by NaN
Hi again,

Really good, you know this API :) That way we can talk about the real problems ;) and we'll be able to find a solution.

Timing the output of the audio device normally does not affect you, coz the device calls your callback, and if you are fast enough to fill the buffers you don't have to do anything more. The timing on the audio card is very accurate. Playing 44100 * 2 * 2 bytes in stereo mode at a 44.1KHz sampling rate will take exactly one second.

Watching your sound device while playing is a bit more tricky. Example: you want to figure out if you are in time with another thread (playing video or reading data from the mic). In that case timeGetTime() is a good solution.
You can store the current time when you start playing and increment the variable after each buffer playback. Getting the time via timeGetTime() then allows you to check if you are in time.
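The arithmetic behind "44100 * 2 * 2 bytes = one second" and the drift check could look something like this. A minimal sketch: buffer_ms() and in_time() are made-up helper names, and the clock value is passed in as a parameter where the real program would call timeGetTime().

```c
#include <assert.h>

/* How long one buffer lasts, from its size and format.  The constants
   in the tests match the example in the post: 44.1 kHz, stereo,
   16-bit (2-byte) samples. */
static unsigned buffer_ms(unsigned bytes, unsigned rate,
                          unsigned channels, unsigned bytes_per_sample)
{
    unsigned bytes_per_sec = rate * channels * bytes_per_sample;
    return bytes * 1000u / bytes_per_sec;
}

/* Drift check in the spirit of the post: keep a running "expected"
   time, bump it by buffer_ms() after each buffer, and compare it
   against the clock (timeGetTime() in the real program). */
static int in_time(unsigned expected_ms, unsigned clock_ms,
                   unsigned slack_ms)
{
    unsigned diff = expected_ms > clock_ms ? expected_ms - clock_ms
                                           : clock_ms - expected_ms;
    return diff <= slack_ms;
}
```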

Give a short explanation if this is what you're trying to do.

Bye Miracle
Posted on 2002-02-11 04:06:29 by miracle
Well miracle, if you want to get the whole picture, here it is:

I want to write a phone program. I wrote the TAPI part of the program such that I can dial numbers and so forth. I now need to write the sound part, such that once a connection is established through TAPI, I can listen and speak.

Where does this proggie fit in? I'm programming the sound mechanism separately such that I don't go insane dialing numbers. Once I get this to work - picking the sound from the microphone and sending it to the speakers, I can copy most of the code to the phone app and adapt it for the wave device(s) on the modem. Overall, this sound program is only a small test program, which explains why evil NaN :) crashed it so easily...

As far as the workings of the sound proggie: I understand that to do double buffering, you need at least 2 buffers. The sound program that I posted above uses 5. The idea is that while sound is being recorded into the first buffer, you play the contents of the second buffer, which has been filled previously with sound from the mic.

I still have to figure how to achieve this constant flow of buffers such that when you speak in the microphone, you hear your voice in the speakers. I know the API, it's just a question of timing. As you can see from the example above, I don't quite have the right implementation.

The solution that I now have in mind for switching these buffers and performing the necessary maintenance on them is activating a series of steps once a WIM_DONE message is received by one callback, and another series when a WOM_DONE message is received by the other callback. So, if the callback that handles incoming sound gets a WIM_DONE and I know the first buffer was being filled, then I must do the following on the first buffer:

invoke waveInUnprepareHeader
invoke waveOutPrepareHeader
invoke waveOutWrite

If the callback that handles outgoing sound gets the WOM_DONE message, signaling that the buffer has finished playing, then I must take these steps:

invoke waveOutUnprepareHeader
invoke waveInPrepareHeader
invoke waveInAddBuffer

I didn't try it yet, so I don't know if it works... I have some doubts whether the preparing and unpreparing operations can be done "instantaneously" such that for example, when one buffer is done playing the other is immediately ready to go into playing. We'll have to see, whenever I get the time to implement it... not until this weekend probably.
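The ping-pong plan described above could be simulated like this. To be clear, this is only a model: the wave* calls are reduced to comments and state changes (on_wim_done() and on_wom_done() are made-up names), so it says nothing about whether the real prepare/unprepare round-trip is fast enough, which is exactly the open question.

```c
#include <assert.h>

/* Hedged simulation of the plan: each buffer ping-pongs between the
   input (recording) side and the output (playback) side.  The Win32
   calls are modeled as state changes only. */

enum state { RECORDING, PLAYING };

static enum state buf_state[5];     /* the post's program uses 5 buffers */
static int plays = 0, records = 0;

/* WIM_DONE: the input side filled this buffer -- hand it to output.
   In the real program this would be:
   waveInUnprepareHeader / waveOutPrepareHeader / waveOutWrite */
static void on_wim_done(int i)
{
    buf_state[i] = PLAYING;
    plays++;
}

/* WOM_DONE: the output side finished this buffer -- hand it back.
   In the real program this would be:
   waveOutUnprepareHeader / waveInPrepareHeader / waveInAddBuffer */
static void on_wom_done(int i)
{
    buf_state[i] = RECORDING;
    records++;
}
```

Each buffer just alternates between the two states forever, driven entirely by the two DONE messages, with no extra timing logic.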

Does this answer your questions, miracle? Lay your bets: Will it work? And feel free to suggest something different...
Posted on 2002-02-11 17:35:42 by Lysic
Hi Lysic,

now it seems quite clear to me. The ugly thing in it is to perform the synchronization between waveInAddBuffer() and waveOutWrite().
The calls you've described above are right IMHO.

Ok, here we go. Looking at the problem from a distance, I'd suggest one of these:
solution #1: using double buffering for input and output and an additional large buffer between them (easier to understand but copy overhead)

solution #2: 5..6 single buffers toggled between filling, waiting and reading
(not as easy to understand, but the faster one). But here is a trap: you have to call waveInPrepareHeader() and waveOutPrepareHeader() before using each buffer. This may slow things down.

Regarding the timing, I'd test it without any first. Implement a function GetInputBuffer() which always gives you the next buffer for write (fill) access, and GetOutputBuffer() for read access. Then do four calls of waveInProc() and two calls of waveOutProc(). Increment a variable after each waveInAddBuffer() and decrement it after each waveOutWrite() to keep track of the status of your buffers.
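Solution #1 (the large intermediary buffer) could be sketched as a ring buffer, with a fill counter playing the role of the increment/decrement variable suggested above. A minimal sketch in plain C; ring_put() and ring_get() are made-up names standing in for GetInputBuffer()/GetOutputBuffer()-style helpers, and the copy overhead mentioned in the post is visible in the element-by-element loops.

```c
#include <assert.h>

/* Sketch of solution #1: the input callback copies each finished
   buffer into a large ring buffer, the output callback takes data
   back out before calling waveOutWrite(). */

#define RING_SIZE 1024

static short ring[RING_SIZE];
static unsigned head = 0, tail = 0;
static unsigned fill = 0;            /* samples currently queued */

/* called from the input side after a buffer finishes recording */
static int ring_put(const short *src, unsigned n)
{
    if (fill + n > RING_SIZE) return 0;   /* overrun: caller drops data */
    for (unsigned i = 0; i < n; i++) {
        ring[head] = src[i];
        head = (head + 1) % RING_SIZE;
    }
    fill += n;
    return 1;
}

/* called from the output side to refill a buffer before playing it */
static int ring_get(short *dst, unsigned n)
{
    if (fill < n) return 0;               /* underrun: not enough data */
    for (unsigned i = 0; i < n; i++) {
        dst[i] = ring[tail];
        tail = (tail + 1) % RING_SIZE;
    }
    fill -= n;
    return 1;
}
```

The payoff is that the input and output sides never touch each other's buffers directly; they only meet at the ring, which is what makes this version easier to reason about.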

Assuming that input and output use the same sampling rate and sample size, I bet it will work (90% chance) :)

Bye Miracle
Posted on 2002-02-12 05:01:46 by miracle
Thanks, miracle

I don't quite understand your method of using a larger intermediary buffer. I'll try the other things that you recommended sometime soon... hopefully.
Posted on 2002-02-12 08:45:50 by Lysic