Implementing SAPI training is relatively difficult, and the documentation doesn't tell you everything you need to know.
ISpRecognizer2::SetTrainingState switches the recognizer into or out of training mode.
When you enter training mode, all that really happens is that the recognizer gives the user much more leeway about recognitions: if you're trying to recognize a phrase, the engine will be far less strict about matching it.
The engine doesn't actually do any adaptation until you leave training mode with the fAdaptFromTrainingData flag set.
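In C++ terms, the mode switch looks roughly like this (a sketch only, with error handling omitted; `cpRecognizer` is assumed to be an already-initialized in-proc `ISpRecognizer`):

```cpp
// Sketch: enter and leave training mode via ISpRecognizer2.
CComPtr<ISpRecognizer2> cpRecognizer2;
hr = cpRecognizer->QueryInterface(&cpRecognizer2);

// Enter training mode.
hr = cpRecognizer2->SetTrainingState(TRUE, FALSE);

// ... collect and store the training utterances ...

// Leave training mode, and tell the engine to adapt from the
// training data it finds under the profile storage.
hr = cpRecognizer2->SetTrainingState(FALSE, TRUE);
```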
When the engine adapts, it scans the training audio stored under the profile data. It's the training code's responsibility to put new audio files where the engine can find them for adaptation.
These files also have to be labeled, so that the engine knows what was said.
So how do you do this? You need to use three of the lesser-known SAPI APIs. In particular, you need to get the profile token using ISpRecognizer::GetObjectToken, and use ISpObjectToken::GetStorageFileName to properly locate the files.
Finally, you also need to use ISpTranscript to generate properly labeled audio files.
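Getting a file path under the profile storage might look like the following sketch (error handling omitted; the value name and file-name specifier here are illustrative placeholders, not documented names):

```cpp
// Sketch: locate a writable audio file under the recognition profile.
CComPtr<ISpObjectToken> cpProfileToken;
hr = cpRecognizer->GetObjectToken(&cpProfileToken);

LPWSTR pszWavPath = NULL;
hr = cpProfileToken->GetStorageFileName(
        CLSID_SpStream,                           // CLSID the file is stored under
        L"TrainingAudio",                         // registry value name (illustrative)
        L"TrainingUtterance1.wav",                // file-name specifier (illustrative)
        CSIDL_FLAG_CREATE | CSIDL_LOCAL_APPDATA,  // where to create the folder
        &pszWavPath);                             // receives the full path

// ... use pszWavPath, then free it:
CoTaskMemFree(pszWavPath);
```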
To put it all together, you need to do the following (pseudo-code):
Create an inproc recognizer & bind the appropriate audio input.
Ensure that you’re retaining the audio for your recognitions; you’ll need it later.
Create a grammar containing the text to train.
Set the grammar’s state to pause the recognizer when a recognition occurs. (This helps with training from an audio file, as well.)
When a recognition occurs:
Get the recognized text and the retained audio (using ISpRecoResult::GetAudio).
QI the audio for IStream.
Create a stream object using CoCreateInstance(CLSID_SpStream).
Copy the retained audio into the stream object (using IStream::CopyTo).
Close the stream, update the grammar for the next utterance, resume the recognizer, and repeat until you’re out of training text.
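The per-recognition steps above can be fleshed out roughly as follows (a sketch, assuming the ATL smart pointers and sphelper.h helpers; error handling is omitted, and `pszWavPath` is the path obtained earlier from GetStorageFileName):

```cpp
// Sketch: save one training utterance as a labeled audio file.
void SaveTrainingUtterance(ISpRecoResult *pResult, LPCWSTR pszWavPath)
{
    // Get the recognized text.
    LPWSTR pszText = NULL;
    pResult->GetText(SP_GETWHOLEPHRASE, SP_GETWHOLEPHRASE, TRUE,
                     &pszText, NULL);

    // Get the retained audio, and QI it for IStream.
    CComPtr<ISpStreamFormat> cpAudio;
    pResult->GetAudio(0, 0, &cpAudio);
    CComQIPtr<IStream> cpSource(cpAudio);

    // Capture the audio format so the file stream matches it.
    CSpStreamFormat fmt;
    fmt.AssignFormat(cpAudio);

    // Create a SAPI stream object and bind it to the target file.
    CComPtr<ISpStream> cpFile;
    cpFile.CoCreateInstance(CLSID_SpStream);
    cpFile->BindToFile(pszWavPath, SPFM_CREATE_ALWAYS,
                       &fmt.FormatId(), fmt.WaveFormatExPtr(), 0);

    // Copy the retained audio into the file.
    STATSTG stat;
    cpSource->Stat(&stat, STATFLAG_NONAME);
    cpSource->CopyTo(cpFile, stat.cbSize, NULL, NULL);

    // Label the file with what was said, via ISpTranscript.
    CComQIPtr<ISpTranscript> cpTranscript(cpFile);
    cpTranscript->AppendTranscript(pszText);

    cpFile->Close();
    CoTaskMemFree(pszText);
}
```

After each utterance is saved this way, update the grammar for the next phrase and resume the recognizer, as described above.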