Clone data from Genesys relationally (without returning JSON)

Hi all,

Genesys holds a massive set of data for our organization's interactions and all related tables (as the number of endpoints in the API explorer suggests), but some specific questions we want to answer about our data are difficult to handle in the built-in reports in the PureCloud UI.

Is there a graceful way to clone some or all of the data stored in Genesys to a local SQL database so we can run more specific queries? I know this may be a stretch as a dev question, but so far I've been trying to do this manually by fetching from the query endpoints on a set interval and flattening the JSON into something like a table, which is a really clumsy way of doing it.
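Roughly, my current approach looks like the sketch below: query one endpoint on an interval, then hand-flatten the nested result. The endpoint path is from the API explorer; the flattened field names and helper functions are my own simplification, not the full payload or our exact code.

```python
import requests

# Simplified sketch of my current manual approach: query the analytics
# endpoint, then hand-flatten each nested conversation into one flat
# row per participant. Field names are illustrative; the real payload
# is much deeper.
BASE = "https://api.mypurecloud.com"  # region host varies per org

def fetch_conversations(token, query_body):
    resp = requests.post(
        f"{BASE}/api/v2/analytics/conversations/details/query",
        json=query_body,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json().get("conversations", [])

def flatten(conversation):
    # One row per participant; conversation-level fields get repeated,
    # and anything nested deeper is dropped or handled case by case.
    for p in conversation.get("participants", []):
        yield {
            "conversation_id": conversation.get("conversationId"),
            "conversation_start": conversation.get("conversationStart"),
            "participant_id": p.get("participantId"),
            "purpose": p.get("purpose"),
        }
```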

Any ideas are appreciated!

Nope.
You're building or buying an ETL solution to do it.
The most help you'll get is this handy (if out-of-date) blueprint.

Our Insights Integration follows the same process.

Hi bsami,

Maciej here from the Genesys product team.

Is your use case mainly focused on conversation data, or are you interested in extracting other data points / entities?

For conversation data, the API path is a pretty common one (Eos_Rios's blueprint covers that nicely). There is also the EventBridge path if you are looking for more real-time data (that data is also flattened).

We are currently working through the design phase of a solution that will enable a more scalable, native data extraction mechanism (flat, near-real-time, historical bulk + incremental daily exports).

I'd love to learn more about what you have currently built and your use case.

Thanks,
Maciej

Hi Maciej,

For added context, we're looking to expand our insights and reporting around our interactions and their related data. For example, one of the insights we most recently looked into was how many times a certain evaluation question was answered truthy in the last 90 days. To get that, though, we needed to extract all conversations with an associated evaluation ID in the last 90 days, then separately pull the user data for agent participants, as well as the evaluation scores and evaluation questions, from different endpoints based on their own IDs.

The challenge overall is that we're pulling data from a relational structure via API calls that return it as nested JSON. Then, for each data point with an associated ID, we need to make another call. So we ended up making 4-5 different API calls per conversation just to get specific evaluation information.
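To make the fan-out concrete, a stripped-down version of what happens per conversation looks something like this. Assume the conversation ID, evaluation ID, and agent user ID have already been pulled out of the initial analytics query; the endpoint paths are ones from the API explorer, but treat the whole thing as a sketch rather than our exact code:

```python
import requests

BASE = "https://api.mypurecloud.com"  # region host varies per org

def get_json(token, path):
    # Tiny helper for illustration, not part of any SDK.
    resp = requests.get(
        f"{BASE}{path}", headers={"Authorization": f"Bearer {token}"}
    )
    resp.raise_for_status()
    return resp.json()

def enrich(token, conversation_id, evaluation_id, agent_user_id):
    # Call 1: agent details by user ID.
    user = get_json(token, f"/api/v2/users/{agent_user_id}")
    # Call 2: the evaluation itself (scores and answers).
    evaluation = get_json(
        token,
        f"/api/v2/quality/conversations/{conversation_id}"
        f"/evaluations/{evaluation_id}",
    )
    # Call 3: the evaluation form, to map answers back to question text.
    form_id = evaluation["evaluationForm"]["id"]
    form = get_json(token, f"/api/v2/quality/forms/evaluations/{form_id}")
    return user, evaluation, form
```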

Our team is looking to answer more business questions with this data, so we considered cloning recent data into a local relational database instead. My current solution has been to pull all the records from the last 90 days for each endpoint separately, flatten the data, and save each set into its own database table. That way the relations are at least maintained via the associated IDs: if we have a table of all call interactions and we want user data, it's all available via the user table and user ID.
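With that layout, the kind of question above collapses into a single join against local tables. Here's a minimal sqlite sketch; the table and column names are our own convention, not anything Genesys defines:

```python
import sqlite3

# One table per endpoint, joined on the IDs the API already gives us.
con = sqlite3.connect("genesys_clone.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS interactions (
    conversation_id TEXT PRIMARY KEY,
    conversation_start TEXT,
    agent_user_id TEXT
);
CREATE TABLE IF NOT EXISTS users (
    user_id TEXT PRIMARY KEY,
    name TEXT
);
""")

# "All call interactions with their agent's name" becomes one local
# query instead of one API round-trip per conversation.
rows = con.execute("""
    SELECT i.conversation_id, i.conversation_start, u.name
    FROM interactions i
    JOIN users u ON u.user_id = i.agent_user_id
""").fetchall()
```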

Those are the use cases and our solution, which is why we thought maybe cloning the whole database would make querying our data easier. Hope this answered your question!
