We have enabled the Extended Voice Transcription integration. However, there appears to be no way to define a custom-trained model. Is this something that will become possible?
It looks like this is possible for the Microsoft Azure Cognitive Services Speech To Text integration, although the documentation states it is only supported for bot flows.
The transcription quality is fairly good, but we would get much better accuracy if we could specify our own Azure model.
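For reference, this is roughly what we mean: when calling the Azure Speech SDK directly, a deployed Custom Speech model is selected by setting its endpoint ID on the speech config. A minimal sketch, with the key, region, endpoint ID, and file name as placeholders:

```python
# Sketch of pointing the Azure Speech SDK at a Custom Speech model.
# The key, region, endpoint ID, and file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY",  # placeholder Azure Speech resource key
    region="YOUR_REGION",            # e.g. "westeurope"
)
# Select the custom-trained model: the endpoint ID comes from the
# model deployment in Azure Speech Studio.
speech_config.endpoint_id = "YOUR_CUSTOM_ENDPOINT_ID"

audio_config = speechsdk.audio.AudioConfig(filename="call_recording.wav")
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```

It's this endpoint-ID selection that we can't find any equivalent for in the EVTS integration settings.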
EVTS (Extended Voice Transcription Service) is only for speech analytics, and it runs on our Azure account, so the model is fixed. The bot integration you mentioned lets you bring your own Azure account, which is why custom models work there. I believe this is on the roadmap for next year, but I would need to check with the WEM team.
Both features use Azure transcription, but for different purposes and with different configuration. It is not one integration serving both Speech Analytics and bots; there is one for each, and although both use Azure, they can't be mixed.