Queues and Metrics POST/GET API Real-Time data

Hello,

Thank you for all your help through this.

I need to collect Real-Time data for alerting on Queue and Agent metrics.
The admin in my company has asked me to specify the APIs I need (so he can provide access to those specific APIs). He said there are hundreds and he needs the exact API names.

I found this analytics link, but I'm not sure of the set of POST or GET APIs I need to use!

Agent Metrics

On Queue Agents (Count)
Interacting Agents
Communicating Agents
Break Agents
Idle Agents (Count)
ASA (Time)
Not Responding Agents (Count)
Statuses
Alert-No Answer (Count)
Total Alert No-Answer (Count)
Answer (Count)
Alerted (Count)
Avg Alert (Time)
Avg Handle (Time)
Avg Hold (Time)
Avg ACW (Time)
Avg Talk (Time)
Total Talk (Time)
Total Handle (Time)
Total Hold (Time)

Queue Metrics

Waiting (Count)
Longest waiting (Time)
Interactions (Count)
Longest Interactions (Time)
Abandon (Count)
Average Abandon (Time)
Flow-Out (Count)
Avg Talk (Time)
Avg Wait (Time)
Offered (Count)
Answer (Count)
Hold (Count) (Time)
Avg Abandon (Time)
Avg ACW (Time)
Longest Interaction (Time)
Max Abandon (Time)
Max ACW (Time)
Max Talk (Time)
Max Wait (Time)
Max Answer (Time)
Voice mail
Total Talk (Time)
Total Handle (Time)
Total Hold (Time)
Min wait (Time)
Min Abandon (Time)
Min Handle (Time)
Min Hold (Time)
Min Talk (Time)
Date Span

Thanks
Ashish

The closest you can get is probably going to be the User Status Aggregate Query. This will give you a count of presences and routing statuses. The Queue Observation Query will also be useful. It will give you a count of users on/off queue.

To identify the count of agents that have conversations in various states (e.g. connected, on hold, muted, acw, etc.), you'll need to pull the conversation details and aggregate the data manually.

I'm pretty sure that the following endpoints will work:

  1. POST /api/v2/analytics/users/aggregates/query
  2. POST /api/v2/analytics/queues/observations/query

You may also need GET /api/v2/users to get a list of user IDs.

Thanks a lot @anon11147534!

One question on this thread.

What is the cached data time for each of these POST APIs? It is important to know in order to determine the collection interval and avoid collecting duplicate data.

Observational data is expected to be accurate at the time it is requested. There is not a standard delay due to caching.

Thank you for your response @tim.smith .
Sure, not worried about delay; I meant I'd like to know exactly how long data is cached or stays in the cache.
Example: if we make an API call every 1 minute and the data stays in the cache for 5 minutes before it's dropped, then we risk collecting the same data 5 times.

What cache are you referring to?

I may be wrong, but I was concerned about "Web/App Caching".
If we are making a lot of frequent calls and expecting as-close-to-real-time-as-possible updates with polling, another concern is app caching: you make a call and the API queries the backend data store for you; you make the same call 30s later and, instead of querying the backend for a new result, it grabs the same result from 30s ago from the cache and returns that to you again.

Hi Askiwar,

Many of our services do use caching on the backend. However, when we do use a cache, we also use a messaging-based eviction policy, so if the system of record changes, the data is automatically invalidated in the cache and the next read will pull the record out of the underlying table and then update the cache. This means our cache records are always in sync with the underlying tables.

We almost never use a time-based eviction policy in the manner you are describing. The one thing I would be careful of is heavy polling on the observations endpoint. We do have a rate limit of 300 calls per minute on an OAuth token, so if you have a high volume of calls hitting this endpoint you can run into situations where you get rate-limited on that token. You have to make sure you stay under 300 calls per minute on the token.

Thanks,
John Carnell
Manager, Developer Engagement


Thank you John for the insight!
What is the limit if I use GENESYS_CLOUD_CLIENT_ID and GENESYS_CLOUD_CLIENT_SECRET?

Hi Asikarwar,

Here is a link to our rate limits surrounding OAuth tokens. I also did a video series a couple of months ago around this topic here. Video 6a goes into some background on our rate limits, while videos 6b and 6c talk about some different strategies for retry logic.

I also cover the topic of rate limits and retries in the following DevCast, and all of the example code from that DevCast is here.

I know that's probably way more information than you were expecting, but one of the common challenges Genesys Cloud devs run into is not understanding how our rate-limiting works. By the time they find out, their application is in production and has an issue :).

Thanks,
John Carnell
Manager, Developer Engagement


Wow John, that is a superb set of videos and documents.

I will mostly be writing scripts in Python.
Let me restate it to make sure I understood.

If we make 1 call to POST /api/v2/analytics/queues/observations/query every 10 seconds, we are well under the 300 calls/minute rate.

But if I am fetching the Number of On-Queue Agents with 1 call every 5 seconds, which requires two APIs:

  1. GET /api/v2/routing/queues
  2. POST /api/v2/analytics/queues/observations/query

Then, combined, I have exceeded the 300 calls/minute rate, right?

Thanks
Ashish


Our team, which is responsible for supporting internal teams on Genesys-related queries, asked me to build an alerting system (they already use the out-of-the-box alerting provided by Genesys PureCloud, but it has some limitations).

We already have a monitoring platform which I can leverage to help them. All I need to do is collect metrics and trigger alerts.
Not as easy as it sounds.
I was told they have some metrics where I need to check data every 5 to 10 seconds, which will definitely exceed the rate limit.
I was wondering if using WebSockets or the notification service would help?

Hi Askirarwar,

That is correct about 1 call every 5 seconds with 2 API calls exceeding your rate limits. So let me point out a couple of things that might help.

  1. Queues. In many organizations, queue information does not change on a regular basis (e.g. you normally do not constantly tear down queues). So in the use case you're describing, you could probably cache the results of GET /api/v2/routing/queues, read the data out of that cache, and expire it once a day (or whatever is appropriate). I did this in the Java Spring Boot example I sent you using the Caffeine library, and it significantly reduces the number of calls to the API.

I am not a Python expert (I have coded in Python enough to be dangerous), but there are a number of cache libraries available in Python that could do the same thing. One example of this is here.

  2. Queue Metrics. The real question you have to ask yourself is how much lag you are willing to accept, i.e. how "fresh" you need your queue observation metrics data to be. Most people think they need near-real-time data. However, I have worked in call centers, and the reality is that for most business decisions data that is anywhere from 1 to 5 minutes old is more than enough. If that is the case, you could again leverage a cache, this time with a 1-5 minute expiration (a quick sketch of both caching ideas follows below).

If you truly need the most up-to-date queue data, you can use our Notification service. Our notification service allows you to open a WebSocket connection and receive "change" updates directly from Genesys Cloud. It is how our UI receives real-time updates. (There is a short WebSocket sketch after the list of resources below.)

There are a couple of good places to learn about the notification service:

  1. Notification Service Documentation
  2. Notification Developer Tool. This is a web-based tool that lets you subscribe to notifications and see the data coming in from your subscription.
  3. Python tutorial using Notification API
  4. CLI example of how to listen to the notification service

Hope that helps.

Thanks,
John Carnell
Manager, Developer Engagement

You are superb John!
