Data Action - rate limits - token usage

Hello,

Our customer plans to use a large number of Data Actions within their different Flows.

There will be two types of Data Actions: one that executes against the Genesys Cloud API and one that makes requests against an external web service.
At peak times we expect about:

  • 500 interactions per minute; for each interaction we expect 40 Data Action requests (20 against the external service, 20 against the GC API)

Thus:

  • 10000 requests per minute - GC API
  • 10000 requests per minute - external Web Service

I know that in this case we need to take a close look at these rate limits:

Auth API:

  • token.creation.rate.per.minute <- max 300

Platform API:

  • client.credentials.token.rate.per.minute <- can be increased to 3000
  • org.app.user.rate.per.minute <- can be increased to 5000

I guess we can mitigate this problem by adding additional OAuth Clients.

  • token.rate.per.minute <- max 300

DataActions

  • requests.volume.max <- should be OK, as this parameter can be increased to 25000 (see the quick check below)
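
As a quick sanity check of these numbers, here is a minimal back-of-the-envelope sketch. The limit values are the ones quoted in this post, not authoritative; please confirm the current values in the Genesys Cloud documentation:

```python
# Back-of-the-envelope check of the expected peak load against the quoted limits.
INTERACTIONS_PER_MINUTE = 500
GC_API_ACTIONS_PER_INTERACTION = 20
EXTERNAL_ACTIONS_PER_INTERACTION = 20

gc_api_rpm = INTERACTIONS_PER_MINUTE * GC_API_ACTIONS_PER_INTERACTION      # 10000
external_rpm = INTERACTIONS_PER_MINUTE * EXTERNAL_ACTIONS_PER_INTERACTION  # 10000

REQUESTS_VOLUME_MAX = 25000  # raised requests.volume.max, as quoted above
ORG_APP_USER_RATE = 5000     # raised org.app.user.rate.per.minute, as quoted above

total_rpm = gc_api_rpm + external_rpm
print(f"total data action requests/min: {total_rpm} (limit {REQUESTS_VOLUME_MAX})")

# 10000/min of GC API calls against a 5000/min per-client limit suggests
# spreading the load over at least two OAuth clients.
clients_needed = -(-gc_api_rpm // ORG_APP_USER_RATE)  # ceiling division
print(f"OAuth clients needed at {ORG_APP_USER_RATE}/min each: {clients_needed}")
```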

Right now one question is: when is a token generated in an Architect Flow?

  • one token for each interaction?
  • one token for each interaction in each flow?
  • one token for each Data Action per flow?
  • or are they cached and reused until they expire?

In this scenario, will we run into the limits set by token.rate.per.minute and token.creation.rate.per.minute?
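
For reference, if the answer turns out to be the last option, the behaviour would be equivalent to a client-credentials token cache like the following. This is only a minimal sketch of the standard OAuth pattern, not confirmed Architect internals; the login host is region-specific and the one shown here is just an example:

```python
import time
import requests  # third-party: pip install requests

TOKEN_URL = "https://login.mypurecloud.com/oauth/token"  # region-specific login host

class TokenCache:
    """Fetch a client-credentials token once and reuse it until it expires."""

    def __init__(self, client_id: str, client_secret: str):
        self._auth = (client_id, client_secret)  # HTTP Basic auth
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Reuse the cached token; refresh 60 s before expiry as a safety margin.
        if self._token is None or time.time() > self._expires_at - 60:
            resp = requests.post(
                TOKEN_URL,
                data={"grant_type": "client_credentials"},
                auth=self._auth,
                timeout=10,
            )
            resp.raise_for_status()
            body = resp.json()
            self._token = body["access_token"]
            self._expires_at = time.time() + body["expires_in"]
        return self._token
```

With a cache like this, the token creation rate is driven by the token lifetime rather than by the number of interactions, which is why the answer to the question above matters so much.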

Kind regards

Gerrit

Hi Gerrit,

Thanks for reaching out to us on the Developer Forum. I have been working with your CSM, Nico, and I am going to have him reach out to you, as some of the assumptions around your rate limits are complicated by how fast your data action calls are going to return. Just because a rate limit is configurable does not mean it will automatically be raised. I have forwarded your inquiries to several of our product managers.

Based on what I could see in your description, I recommended to him that you engage our Professional Services group, because some of the data action call volumes are very high and might be mitigated through caching, data tables, etc. Also note that there are hard limits that cannot be mitigated: the 3000 calls per minute limit applies across all of your OAuth client credential tokens, cannot be raised, and cannot be worked around by cycling through different tokens.

Unfortunately, to give you the best answer there would need to be a deeper analysis of what you are trying to accomplish, as we have a lot of very big customers running complicated flows that do not make the number of GC API and web service calls you are laying out here.
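
One defensive pattern that helps in any custom code calling the Platform API directly (it does not change the hard limits above) is to honour the Retry-After header that comes back with HTTP 429 rate-limit responses. A minimal sketch:

```python
import time
import requests  # third-party: pip install requests

def get_with_retry(url: str, headers: dict, max_attempts: int = 5):
    """GET with basic handling of HTTP 429 rate-limit responses."""
    for attempt in range(max_attempts):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honour the server's Retry-After header (seconds) if present,
        # otherwise back off exponentially.
        time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError(f"still rate limited after {max_attempts} attempts: {url}")
```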

Thanks,
John Carnell
Director, Developer Engagement

Hi John,

Thanks for the info.

The average execution duration of the Data Actions is around 50 ms.

With respect to caching and data tables, I am busy checking whether there is some room for mitigating the high number of Data Action requests.
But right now I want to understand how Data Actions are executed in Flows, and how the number of tokens is controlled in Architect. For us, Architect is a black box and we don't know when a token is created or re-used.
I could imagine that the rate limits for tokens are the bigger problem in the scenario presented.

I assume that in case a Data Action executes against an external endpoint, the rate limit token.rate.per.minute does not apply. Is that correct?

Kind regards

Gerrit

Hi Gerrit,

That is correct about the token.rate.per.minute rate limit for Data Actions that execute against an external endpoint. You will still need to be aware of the maximum number of concurrent data actions you can have at a time.
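
For a rough sense of scale on that concurrency, Little's Law applied to the numbers earlier in this thread (10000 requests per minute at ~50 ms each; a back-of-the-envelope estimate that assumes evenly spread arrivals):

```python
# Little's Law: average concurrency = arrival rate x average duration.
rate_per_second = 10_000 / 60  # 10000 data action requests per minute
avg_duration_s = 0.050         # ~50 ms average execution time, per Gerrit

print(f"expected concurrent data actions: {rate_per_second * avg_duration_s:.1f}")  # ~8.3
```

Real traffic is burstier than that, so peak concurrency will be higher, but it gives you a baseline to compare against the concurrency limit.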

Thanks,
John Carnell
Director, Developer Engagement

I recommend that you perform a load test to verify that you do not run into any rate limiting or performance problems.

During or after the load test, you can confirm that your actions did not have any error, duration, or rate limiting issues with the "Data Action Performance Summary View".
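
If it helps, a very basic load generator can be as simple as the sketch below. The endpoint URL and payload are placeholders for whatever your data actions call, and for anything serious a dedicated load testing tool is the better choice:

```python
import concurrent.futures
import time

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/backoffice/api"  # placeholder endpoint
REQUESTS_PER_MINUTE = 10_000
DURATION_MINUTES = 1

def fire(_):
    """Send one request; return the status code, or None on error/timeout."""
    try:
        return requests.post(TARGET_URL, json={"test": True}, timeout=5).status_code
    except requests.RequestException:
        return None

interval = 60.0 / REQUESTS_PER_MINUTE  # pause between submissions
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    futures = []
    end = time.time() + DURATION_MINUTES * 60
    while time.time() < end:
        futures.append(pool.submit(fire, None))
        time.sleep(interval)

codes = [f.result() for f in futures]
print("429 responses:   ", codes.count(429))
print("errors/timeouts: ", codes.count(None))
```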

--Jason

Hi Jason,

Sure, performing a load test is a very useful way of finding problems with respect to rate limits.

However, from my point of view, the following points need to be checked when designing the different client applications, Data Actions, etc.:

  1. Determine the number of interactions during peak times
  2. Understand the rate limits and their influence on the design
  3. Create appropriate OAuth Clients (e.g. several OAuth Clients for distributing the load; see the sketch after this list)
  4. Perform a load test to verify that the design will work
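
On point 3, distributing the GC API Data Actions over several OAuth Clients could look roughly like this (a minimal sketch with placeholder credentials; note John's point above that the 3000 calls per minute client-credentials cap applies across all OAuth clients in the org, so rotation only helps with per-client limits such as org.app.user.rate.per.minute):

```python
import itertools

# Placeholder pool of OAuth clients, e.g. one per integration.
# Rotating clients spreads per-client limits; it does NOT work around the
# org-wide 3000 calls/min cap on client credential tokens mentioned above.
CLIENT_POOL = itertools.cycle([
    ("client-id-1", "secret-1"),
    ("client-id-2", "secret-2"),
])

def next_credentials() -> tuple:
    """Round-robin over the configured OAuth clients."""
    return next(CLIENT_POOL)
```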

Unfortunately, we are stuck on point 2.

Thus, from my point of view, it is not yet time for a load test. Besides that, I am not sure how reliable such a test would be:

  • the performance of the different back-office systems is not comparable to the one in production.

  • the prod system is already live with around 200 active users, and the number of users will increase to 3500 next year; therefore load testing is not possible in the prod environment.

How would you perform a load test? Is that something we should discuss with our CSM?

Kind regards

Gerrit

Engaging with your CSM to work through your limit questions as well as your load testing needs seems like the right approach to me.
