So I'm starting to mess with the WebRTC SDK for a project we're planning, and I wanted to ask: it's probably not possible, but there's no harm in asking... is there an alternative to the WebRTC SDK for softphone-like functionality?
Let me explain: A lot of our users are running a screen reader called "Job Access With Speech" ("JAWS" for short), so supporting JAWS is a must-have. But here's the thing: the Genesys Cloud desktop app (which I'm guessing is some kind of Electron thing, or maybe Chromium Embedded?) is a bit of a resource hog (as are Electron/NW apps in general). So is JAWS. So are browsers. So, naturally, we've seen performance issues with users who need to run all three of these at once - a problem I would like to head off now, while we're still in the design stage and before I start coding. Now, I'm a weirdo (lol) - one of those old-school command-line-loving nerds who believes that "not everything has to be an Electron app". The app we're building will not just be a softphone, but that is an important part of it.
Anyway, I noticed Genesys has SDKs for Python and .NET, so I'm thinking maybe I could use one of those from a basic desktop app (not a web thing). How would that work - or, better question, would that work? lol. I also wonder how it works in general, things like interfacing with the user's headset/mic, etc. I have similar questions about the WebRTC SDK itself, but I bet it's WebSockets (not straight REST/HTTP) under the hood. So if you're not running in a browser, how would "listening for events" even be possible, if at all? I've sketched out below what I'm imagining for the events part.
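To be concrete, here's roughly what I'm picturing for the "listening for events without a browser" part, pieced together from what I can see in the API docs. Treat the endpoint paths, the topic name, and the ACCESS_TOKEN / USER_ID / region placeholders as my assumptions rather than anything confirmed: create a notification channel over REST, subscribe it to a topic, then hold a plain WebSocket open from an ordinary Python process.

```python
# Rough sketch (my assumptions, not verified): listen for Genesys Cloud events
# from a plain Python process using the notification-channel endpoints and a
# WebSocket - no browser involved.
import asyncio
import json

import requests
import websockets

API_BASE = "https://api.mypurecloud.com"  # assumption: your region's API host
ACCESS_TOKEN = "..."                      # placeholder: OAuth access token
USER_ID = "..."                           # placeholder: the agent's user id

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

# 1. Create a notification channel; the response should include a WebSocket
#    connectUri and a channel id.
channel = requests.post(
    f"{API_BASE}/api/v2/notifications/channels", headers=headers
).json()

# 2. Subscribe the channel to the topics we care about (here: the user's
#    conversations - topic name is my best guess from the docs).
topics = [{"id": f"v2.users.{USER_ID}.conversations"}]
requests.put(
    f"{API_BASE}/api/v2/notifications/channels/{channel['id']}/subscriptions",
    headers=headers,
    json=topics,
)

async def listen() -> None:
    # 3. Open the WebSocket and handle events as they arrive.
    async with websockets.connect(channel["connectUri"]) as ws:
        async for message in ws:
            event = json.loads(message)
            print(event.get("topicName"), event.get("eventBody"))

asyncio.run(listen())
```

If something along those lines is viable, then the part I'm really unsure about is the audio/headset side - I assume that's where the WebRTC SDK is doing the heavy lifting.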
Don't get me wrong - I'm sure the powers that be will want it to look pretty, and these days pretty beats performant (rule of cool and all that). It'll almost certainly end up as an NW or Electron app, but it would be great to be able to offer an alternative... if there is one. No biggie if not; this is more out of curiosity than anything.