Creating a realtime reporting tool for Genesys flows

I wrote a blog article about a tool I created for work to allow us to track custom metrics in our Architect flows using the Event Bridge Integration and Participant Attributes.

It only costs ~$40 a month and allows us (in near-realtime) to alert on issues, do anomaly detection and track customer behaviour - all in the same reporting platform we use for the backend services our flows depend on.
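
To give a rough idea of the plumbing, here's a minimal sketch of the kind of Lambda that could sit between the EventBridge integration and DataDog. The event fields, the `Metric_` attribute prefix and the `genesys.flow.*` metric names here are placeholders for this example rather than the exact payload and naming convention we use.

```python
# Sketch of an AWS Lambda handler: receives a Genesys Cloud EventBridge event,
# pulls out participant attributes that follow an assumed "Metric_" naming
# convention, and forwards them to DataDog as custom metrics.
# The event shape and attribute names below are illustrative assumptions,
# not the exact payload Genesys sends.
import os
import time

from datadog import initialize, api

initialize(
    api_key=os.environ["DD_API_KEY"],
    app_key=os.environ["DD_APP_KEY"],
)

METRIC_PREFIX = "Metric_"  # assumed naming convention for metric attributes


def handler(event, context):
    detail = event.get("detail", {})
    # Assumed location of participant attributes in the event payload.
    attributes = detail.get("attributes", {})

    for name, value in attributes.items():
        if not name.startswith(METRIC_PREFIX):
            continue
        # e.g. "Metric_CustomerRoutingMetric_Unknown" becomes the DataDog
        # metric "genesys.flow.customerroutingmetric_unknown" with value 1.
        metric = "genesys.flow." + name[len(METRIC_PREFIX):].lower()
        api.Metric.send(
            metric=metric,
            points=[(int(time.time()), 1)],
            tags=[f"conversation_id:{detail.get('conversationId', 'unknown')}"],
        )

    return {"forwarded": True}
```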

I'd love to hear people's thoughts on my approach and any improvements/considerations.


Hi Lucas,

This is very interesting. Could you say a little more about how you're using these metrics?

So for example, you're setting this metric when the customer's identity isn't known (e.g. not in your CRM)?

CustomerRoutingMetric_Unknown(2023-03-05T08:18:04.568Z)

What are you doing to then shape your conclusion? I guess if this value is normally set 20 times per day and then today it's at 500, something is wrong with your API call to CRM but...

Just a little more on the types of metrics you are setting and what that information is allowing you to do would be very interesting.

We use the metrics for run-of-the-mill questions:

  • How are people responding to a question (with quick responses) and what is the impact of changing the wording?

  • How many people who had their intent recognised and were served information still responded to our follow-up question with quick response X?

  • What is the breakdown of successful/failed X self-serve journeys across WM, Voice, WhatsApp etc?

  • How many people entered X (Acc # etc) successfully first/second/third time, and what would the impact be of changing the wording?

  • We're seeing random failures, so track how many times DataAction X responds with X over the next few days to diagnose whether that is the cause

  • etc

And we leverage them in DataDog (there's a rough monitor sketch after this list) to:

  • Monitor the amount of successful/failed submissions of X and alert me (via phone/SMS/Slack) if the number unexpectedly changes

  • If flow metric X increases, what did metrics/logs X of the backend service it depended on look like at that time?

    • We are building ever more complex journeys/routing that depend on more services across the business, so this type of correlation becomes more and more useful
  • During a live incident, create a document (DD Notebook) to curate your investigations alongside up-to-date metrics

    • This was useful when looking at, and providing updates on, a recent incident where customers were submitting X across different platforms (Genesys being just one) and we wanted one place to show the sudden increase in submissions and their respective success/failure rates
  • The X team rely on this metric (see below), so alert me if it drops to zero, indicating it may have been accidentally removed

  • etc
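
To make the first bullet a bit more concrete (using the earlier unknown-customer example), here's roughly what defining one of those monitors looks like via the DataDog API. The metric name, threshold and notification handles are made-up placeholders rather than our real configuration.

```python
# Sketch: create a DataDog metric monitor for one of the flow metrics using
# the datadog Python library. Metric name, threshold and notification
# handles below are placeholders for illustration only.
import os

from datadog import initialize, api

initialize(
    api_key=os.environ["DD_API_KEY"],
    app_key=os.environ["DD_APP_KEY"],
)

# Alert if the hourly count of a flow metric goes above an assumed threshold.
api.Monitor.create(
    type="metric alert",
    query=(
        "sum(last_1h):sum:genesys.flow.customerroutingmetric_unknown{*}"
        ".as_count() > 500"
    ),
    name="Unexpected spike in unknown-customer routing",
    message=(
        "Unknown-customer routing is spiking - check the CRM lookup "
        "DataAction. @slack-contact-centre @pagerduty"
    ),
    tags=["team:contact-centre", "source:genesys-flows"],
)
```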

We also extract a lot of conversational data (inc. these metric Participant Attributes) into a Data Warehouse that contains data from other systems, meaning Data Analysts are able to join conversations by metric(s) X to data from other systems and get a much richer understanding of a customer's journey through all the various systems (there's a rough sketch of this kind of join after the list below). I'm no Data Analyst, but some questions I've seen asked using this data are:

  • If a customer did X or answered X in a flow/bot and was transferred to an agent, how long did it take the agent to resolve?

  • For customers who went through flow/bot X, how many resulted in a 'case' being raised in X, and could we update the flow/bot to reduce this?

  • etc
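
For what it's worth, here's a toy pandas version of that kind of join. The table and column names are invented for illustration and won't match the real warehouse schema.

```python
# Sketch of the kind of join the Data Analysts do in the warehouse,
# expressed in pandas. Table and column names are made up for illustration.
import pandas as pd

# Conversations exported from Genesys, including the metric participant
# attributes (one row per conversation).
conversations = pd.DataFrame(
    {
        "conversation_id": ["c1", "c2", "c3"],
        "metric_customer_routing": ["Unknown", "Known", "Known"],
        "transferred_to_agent": [True, True, False],
    }
)

# Cases raised in the CRM, keyed by the same conversation id.
crm_cases = pd.DataFrame(
    {
        "conversation_id": ["c1", "c2"],
        "handle_time_minutes": [14, 6],
    }
)

# e.g. "for customers whose identity wasn't known and who were transferred,
# how long did the agent take to resolve the case?"
joined = conversations.merge(crm_cases, on="conversation_id", how="left")
unknown_transferred = joined[
    (joined["metric_customer_routing"] == "Unknown")
    & (joined["transferred_to_agent"])
]
print(unknown_transferred["handle_time_minutes"].mean())
```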

Sorry that response was quite wordy.

