How I created automated tests that use a WebRTC softphone

Hey,

I just thought I'd share a blog article I posted today about how I wrote (and continue to write) automated tests that simulate a call between an agent and a customer in order to test a feature in the agent's UI.

Although I've created tests at varying levels, I wanted a few end-to-end tests to give me confidence that I correctly understand the structure and lifecycle of:

  1. Live transcript events
  2. Agent's active conversations (and attributes of conversations)
  3. Embedded Framework events

My feature uses all three of these.
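To give an idea of the overall shape, here's a rough sketch of one of these tests. It's only a sketch assuming Jest + Puppeteer; the URL, the selector, and the placeCustomerCall() helper are placeholders I've made up for illustration, not real APIs from the platform or my project.

```typescript
import puppeteer, { Browser, Page } from 'puppeteer';

// Placeholder values for whatever the real agent UI exposes.
const AGENT_UI_URL = 'https://example.com/agent';            // placeholder URL
const TRANSCRIPT_ROW = '[data-test="transcript-utterance"]'; // placeholder selector

// Hypothetical stand-in for however the customer leg gets originated
// (e.g. a scripted outbound call or a second softphone page).
async function placeCustomerCall(): Promise<void> {
  // ...
}

describe('agent UI live transcript', () => {
  let browser: Browser;
  let agentPage: Page;

  beforeAll(async () => {
    browser = await puppeteer.launch({ headless: false });
    agentPage = await browser.newPage();
    await agentPage.goto(AGENT_UI_URL);
    // Browser-based auth happens interactively around here, which is why
    // these tests don't fit neatly into CI/CD.
  });

  afterAll(async () => {
    await browser.close();
  });

  it('renders an utterance once the customer speaks', async () => {
    await placeCustomerCall();

    // Wait for the feature under test to react to the live transcript event.
    const row = await agentPage.waitForSelector(TRANSCRIPT_ROW, { timeout: 30_000 });
    expect(row).not.toBeNull();
  });
});
```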

Unfortunately, these are unlikely to become CI/CD tests since they rely on browser-based authentication of the user.

Q. Is there a better way to stream audio in/out of the WebRTC softphone?

In my solution I'm using Puppeteer to override the behaviour of the browser's MediaDevices API so I can intercept the audio sent to and from the agent, but I've always wondered whether there's a better way.
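To make that concrete, here's a minimal sketch of the kind of override I mean, assuming Chrome/Puppeteer and a locally served WAV fixture (the fixture path and agent URL are placeholders). It only covers the agent's outgoing (microphone) side; intercepting what the agent hears would need a similar hook on the remote stream.

```typescript
import puppeteer from 'puppeteer';

async function launchAgentWithFakeMic(): Promise<void> {
  const browser = await puppeteer.launch({
    headless: false,
    // Auto-accept the microphone permission prompt so getUserMedia resolves.
    args: ['--use-fake-ui-for-media-stream'],
  });
  const page = await browser.newPage();

  // Patch MediaDevices before any page script runs, so the softphone's
  // getUserMedia call receives audio decoded from a fixture file instead
  // of a real microphone.
  await page.evaluateOnNewDocument(() => {
    const realGetUserMedia =
      navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

    navigator.mediaDevices.getUserMedia = async (constraints?: MediaStreamConstraints) => {
      if (!constraints?.audio) {
        return realGetUserMedia(constraints);
      }
      const ctx = new AudioContext();
      const response = await fetch('/fixtures/customer-greeting.wav'); // placeholder fixture
      const audio = await ctx.decodeAudioData(await response.arrayBuffer());

      // Play the fixture into a MediaStream and hand that stream to the softphone.
      const source = ctx.createBufferSource();
      source.buffer = audio;
      const sink = ctx.createMediaStreamDestination();
      source.connect(sink);
      source.start();
      return sink.stream;
    };
  });

  await page.goto('https://example.com/agent'); // placeholder URL
}

launchAgentWithFakeMic().catch(console.error);
```

Chrome's --use-fake-device-for-media-stream and --use-file-for-fake-audio-capture flags are a simpler alternative for the input side, though they give less control than patching getUserMedia directly.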

I've noticed the 'WebRTC Media Helper' toggle in the Phone Management area before, and I'm sure I saw a reference to something similar when logging out the Embedded Framework object, but I can't find any docs describing JS SDK APIs for it.

Sorry if I posted this in the wrong section.
