Genesys Cloud Developer Forum

Attribute Search Rate Limiting

I'm running into a couple of issues with rate limiting on the /api/v2/conversations/participants/attributes/search endpoint. First, it's limited to 10 calls per minute. Second, the rate limit error messages are misleading: they tell us to retry the query after X seconds, but even after waiting X+5 seconds we continue to get a fresh rate limit error. This lasts for around a minute, after which we're allowed a fresh set of 10 calls before being rate limited again.

Sample body:
{
  "query": [
    {
      "type": "DATE_RANGE",
      "fields": ["startTime"],
      "startValue": "2022-06-21T04:00:00Z",
      "endValue": "2022-06-23T03:59:59Z"
    }
  ]
}
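For anyone hitting the same wall: since the Retry-After value proved too short in practice, a defensive client can wait out the full per-minute window instead. A minimal sketch in Python, where `post` is a hypothetical stand-in for an authenticated POST to the search endpoint (injected here so the retry logic is the focus, not a real Genesys SDK call):

```python
import time

def search_attributes_with_retry(post, body, max_attempts=5, window_seconds=60):
    """Call the attribute search endpoint, backing off for a full
    rate-limit window on HTTP 429 instead of trusting Retry-After.

    `post` is any callable taking a JSON body and returning
    (status_code, response_json); in production it would wrap an
    authenticated POST to
    /api/v2/conversations/participants/attributes/search.
    """
    for _ in range(max_attempts):
        status, payload = post(body)
        if status != 429:
            return payload
        # The server's Retry-After proved too short in practice, so
        # wait out the whole per-minute window before retrying.
        time.sleep(window_seconds)
    raise RuntimeError("rate limited on every attempt")
```

Passing `window_seconds=0` makes the helper easy to exercise against a fake transport without actually sleeping.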

Both of these issues compound to make this endpoint non-performant for extracting batch intraday participant attribute data, which we need for same-day client SLA reporting. I opened a case to at least get the misleading rate limit messages fixed, but was redirected here. Who do I talk to about making this endpoint more performant, or about other endpoints that accomplish the same objectives:

  1. Participant data
  2. Intraday (so not the Conversation Details Job endpoint)
  3. Batch (so nothing that only retrieves attributes one conversation at a time)
  4. Performant when scaled to thousands of conversations (so not capped at 10 calls with a minute-long pause before the 11th/21st/31st... call)

That doesn't seem right. Can you share the case number? All bugs and investigation of customer-specific data (like what's triggering a specific rate limit event or why it's not working as expected) should go through Care.

Analytics Jobs are the preferred way to extract bulk conversations with participant attributes. But it sounds like you probably know that and are focused on the use case for getting that data before it's available via jobs. You're going to run into rate limits if you're making a lot of API requests; there's not really any way around that. If your scale is large enough that you can't make enough requests in a timely manner, you should look into real-time notifications so your service can keep track of conversations as they happen. I'd suggest the Event Bridge option for that type of integration.
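For readers who can live with the jobs latency, the flow is submit, poll, then page through results. A minimal sketch, assuming the analytics conversation-details jobs endpoints as documented (POST /api/v2/analytics/conversations/details/jobs and the matching status/results GETs); `api` is a hypothetical injected transport standing in for an authenticated Genesys Cloud client:

```python
def run_details_job(api, interval):
    """Submit an analytics conversation-details job, poll until it
    finishes, then page through results.

    `api` is a callable (method, path, body) -> dict standing in for
    an authenticated Genesys Cloud client.
    """
    job = api("POST", "/api/v2/analytics/conversations/details/jobs",
              {"interval": interval})
    job_id = job["jobId"]
    while True:
        status = api("GET",
                     f"/api/v2/analytics/conversations/details/jobs/{job_id}",
                     None)
        if status["state"] == "FULFILLED":
            break
        if status["state"] == "FAILED":
            raise RuntimeError("analytics job failed")
        # In production, sleep between polls instead of spinning.
    conversations, cursor = [], None
    while True:
        path = f"/api/v2/analytics/conversations/details/jobs/{job_id}/results"
        if cursor:
            path += f"?cursor={cursor}"
        page = api("GET", path, None)
        conversations.extend(page.get("conversations", []))
        cursor = page.get("cursor")
        if not cursor:
            return conversations
```

The cursor-based paging means results arrive in batches rather than one conversation at a time, which is the "batch" property asked for above.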

You read my mind on the Analytics piece. Another disadvantage of the Analytics endpoints is a 1,024 character limit on participant attribute values. Notifications probably won't work for us as we're using SQL Server Integration Services to drive our ETL, and I can't imagine those two playing well together.


Thanks for the link to the case; I've escalated to Care management to ask that the case be reopened and that the dev team responsible for this endpoint be engaged to continue investigating the unexpected rate limit.

It should work just fine, but it's not a plug-n-play solution. You will need to write something that consumes the notification events, processes them per your business logic, and integrates with SSIS to write the data you need when you need it. The only other options are to do what you're doing, which comes with restrictive rate limits because it's not meant for bulk operations, or to drop your timeframe requirements so you can use analytics jobs, which are meant for bulk operations.

I use SSIS for my ETL.
I actually run a websocket listener, subscribed to all the queues I'm interested in gathering attributes for, as a C# executable under a SQL Server Agent job that shuts down and restarts once a day.

Originally it wrote to sequential files that the merger job would then import, but currently it funnels the data directly into staging tables that the merger merges.
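The attribute-extraction step of a listener like that can be sketched independently of the socket plumbing. A minimal illustration; the event shape below is a simplified stand-in for the queue conversation notification topics, not the exact Genesys payload, and `attributes_to_staging_rows` is a hypothetical helper name:

```python
import json

def attributes_to_staging_rows(message):
    """Flatten participant attributes from one notification event into
    (conversation_id, participant_id, name, value) tuples ready for a
    staging-table insert.

    The event shape here is a simplified stand-in for the queue
    conversation topics a listener would subscribe to.
    """
    event = json.loads(message)
    body = event.get("eventBody", {})
    conv_id = body.get("id")
    rows = []
    for participant in body.get("participants", []):
        # Attributes may be absent or null on some participants.
        for name, value in (participant.get("attributes") or {}).items():
            rows.append((conv_id, participant.get("id"), name, value))
    return rows
```

Keeping the parser pure like this makes it easy to unit test before wiring it to the websocket and the staging tables.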


All - please see my email reply to Ben below:

  1. Rate limiting appears to begin after 10 calls. This seems excessively low relative to other endpoints.

CB: We intentionally started out with a substantially smaller number of calls per minute, based on the expectation that you would get back significantly more data in a single call, since we are only pulling custom attributes and not the rest of the conversation data. As it turns out, customers tend to pack more data into the attributes than we had expected, and as such we are working to increase the CPM from 10 to 100, and then up to 300 to match the rest of our APIs. We hope to have this completed in 30 days.

  2. Rate limit error messages tell us to retry the query after X seconds, but even after waiting X+5 seconds we continue to get a fresh rate limit error. This appears to last for around a minute before we are allowed a fresh set of 10 calls before rate limiting again. Further feedback on this case suggests that the one-minute wait is correct, but that doesn't match the API response saying it should be ready in a few seconds.

CB: This would make sense: retries should not be instantaneous but should be made after the minute has expired, since our rates are per minute.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.