How can channels be used among multiple pods in a scalable deployment (Kubernetes)?

Hi Team,

We are using the Genesys Cloud environment and have enabled transcripts for call conversations. To subscribe to and collect the transcripts, we built an application with the help of the Genesys SDK. We are planning to containerise this application and deploy it in a Kubernetes cluster.

From the Genesys documentation, we understand that only up to 20 channels are supported per user/app combination, and each channel's WebSocket connection is limited to 1,000 topics. So we can open up to 20 WebSocket connections, one per channel. Our query is:
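For context, channels are created and subscribed to through the notifications REST API before the WebSocket is opened. A hedged sketch of the two request shapes involved (the endpoint paths follow the Genesys Cloud public API; the base URL and token handling here are illustrative assumptions, not working credentials):

```python
import json
import urllib.request

# Assumed API host for the usw2 region mentioned below; adjust per environment.
API_BASE = "https://api.usw2.pure.cloud"

def build_create_channel_request(token: str) -> urllib.request.Request:
    """POST /api/v2/notifications/channels creates one of the (max 20)
    channels for this user/app; the response body carries a connectUri
    used to open the channel's WebSocket."""
    return urllib.request.Request(
        f"{API_BASE}/api/v2/notifications/channels",
        data=b"{}",
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

def build_subscribe_request(token: str, channel_id: str,
                            topics: list[str]) -> urllib.request.Request:
    """PUT the topic list (up to 1,000 topics per channel) onto the channel."""
    body = json.dumps([{"id": t} for t in topics]).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/v2/notifications/channels/{channel_id}/subscriptions",
        data=body,
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
```

These builders only construct the requests; the SDK wraps the same calls with retry and auth handling.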

How can the WebSocket connections be distributed among different pods to support a highly scalable deployment?

Suppose we assign one channel to each pod; we can then start up to 20 pods in the Kubernetes cluster, where each pod handles one WebSocket connection associated with a channel and up to 1,000 subscriptions. If we want to scale the application further by increasing the number of pods, say to 100, how can we distribute or share one channel or WebSocket connection among multiple pods (in this example, 5 pods per channel)?
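To make the ceiling explicit, the two limits above cap the total subscription capacity for one user/app regardless of pod count, which is what makes the per-pod-channel scheme stop at 20 pods. A minimal sketch of that arithmetic (the limit constants come from the documentation quoted above):

```python
MAX_CHANNELS_PER_APP = 20      # channels per user/app combination
MAX_TOPICS_PER_CHANNEL = 1000  # topics per channel WebSocket

def max_subscriptions() -> int:
    """Upper bound on topic subscriptions for one user/app,
    no matter how many pods are running."""
    return MAX_CHANNELS_PER_APP * MAX_TOPICS_PER_CHANNEL

def channels_needed(topic_count: int) -> int:
    """How many channels (each held by exactly one WebSocket client)
    a given topic count requires; ceiling division."""
    return -(-topic_count // MAX_TOPICS_PER_CHANNEL)
```

Note that each channel's WebSocket can only be held by a single client process at a time, so a channel cannot literally be shared by 5 pods; that constraint is why the answers below steer toward EventBridge.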

Any guidelines on such cases? Please help.

We are using env - https://apps.usw2.pure.cloud/

WebSocket notifications are intended for single-user, client-side app use cases. While you can use them in other contexts, you're going to run into architecture problems, as you've found out. The correct technology for your use case is EventBridge: https://developer.genesys.cloud/notificationsalerts/notifications/event-bridge

Thanks for the details.

If we use another cloud environment (other than AWS), how can we achieve this?

In such cases, does Genesys provide any other kind of integration layer, or should we implement this integration ourselves?

Please help.

You will still need to use AWS to implement the EventBridge integration to receive the events. Once an event has been delivered to EventBridge, what you do with it is entirely up to you. Assuming you don't want to move your whole project into AWS, the conceptual approach would be to make the EventBridge integration push the data to your service. One option could be having EventBridge run a Lambda to post the event to a REST API endpoint exposed by your service. You can then do whatever load balancing or routing is required within your cluster.
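The Lambda relay described above can be sketched in a few lines. This is a hedged outline, not a production implementation: the cluster endpoint URL is a hypothetical placeholder, and real code would add retries, auth on the endpoint, and dead-letter handling.

```python
import json
import urllib.request

# Hypothetical REST endpoint exposed by the Kubernetes service
# (e.g. via an ingress); replace with your own.
CLUSTER_ENDPOINT = "https://transcripts.example.com/events"

def build_forward_request(event: dict) -> urllib.request.Request:
    """Wrap the incoming EventBridge event as a JSON POST to the cluster."""
    return urllib.request.Request(
        CLUSTER_ENDPOINT,
        data=json.dumps(event).encode(),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

def lambda_handler(event, context):
    """AWS Lambda entry point: EventBridge invokes this with the event
    payload; we relay it and let the cluster's load balancer pick a pod."""
    with urllib.request.urlopen(build_forward_request(event), timeout=5) as resp:
        return {"statusCode": resp.status}
```

Because the cluster's own Service/Ingress load-balances the POSTs, pods become stateless consumers and the 20-channel WebSocket limit no longer constrains pod count.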

Hi Shamas,

Tim is absolutely right in that we currently only support EventBridge. However, we do have a blueprint that shows how to set up a single Lambda in an AWS account to proxy events over to another cloud provider (in this case an Azure Event Grid). It essentially shows exactly what Tim is talking about. The blueprint can be found here.

Thanks,
John Carnell
Director, Developer Engagement

Thanks @tim.smith , @John_Carnell for the detailed information.

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.