AWS S3 recording bulk export by division

Dear Support Team, I'm in the process of adding the AWS S3 recording bulk actions integration in order to export recordings to an AWS S3 bucket. As a prerequisite, I'm completing the Create IAM resources for AWS S3 bucket procedure (Create IAM resources for AWS S3 bucket - Genesys Cloud Resource Center), and I have a few questions about it. If you could help me answer them, I would deeply appreciate it:

  1. In step 6.d, could you please confirm which Genesys Cloud production account ID I should use, 765628985471 or 325654371633? We log in at login.mypurecloud.com, and our region is Americas (US East).

  2. In the procedure Add the AWS S3 recording bulk actions integration (Add the AWS S3 recording bulk actions integration - Genesys Cloud Resource Center), the fourth note from the top states: "Genesys Cloud exports recordings one time. If you create an export that matches a recording that has already been exported, that recording will be skipped." Does this apply only to one specific bucket, or across different buckets? For example, could I export recordings from January 2023 to mybucket1 and also to mybucket2?

  3. Regarding the API procedure to execute the export: can I specify folders as part of the bucket name in the integration's configuration, so that I can organize files by division? Or are only bucket names allowed, in which case I would have to create a bucket for each division?

  4. Lastly, in the code for the "conversationQuery" used to create the recording bulk job (https://developer.genesys.cloud/analyticsdatamanagement/recording/recordings-bulk-action), there are no filters to select the conversations from a specific division:
    function createRecordingBulkJob() {
      return recordingApi.postRecordingJobs({
        action: 'EXPORT', // set to "EXPORT" for export action
        actionDate: '2029-01-01T00:00:00.000Z',
        integrationId: '-- integration id here --', // Only required when action is EXPORT
        includeScreenRecordings: true,
        conversationQuery: {
          interval: '2019-01-01T00:00:00.000Z/2019-06-11T00:00:00.000Z',
          order: 'asc',
          orderBy: 'conversationStart'
        }
      });
    }
    ... can this be done? If so, what would be the code to include a filter by division ID? For example, to query the conversations that correspond to division.id = 21a01f72-d1bd-4efd-a163-64e7146b8349.

If these questions have already been answered, I would very much appreciate it if you could point me to the posts.

Many thanks in advance for your kind support!! Regards.

For the account ID in 6.d, most customers will use 765628985471. Only US government customers whose orgs are hosted in the FedRAMP region would use the other one.

For question #2, the "exported one time" tracking is tied to the AWS S3 integration. So if you have two of those integrations configured, exporting to different buckets, they operate independently: you could export the same recording through each of them once. But once a recording has been exported through a given integration, it can't be exported through that integration again.

For #3, you can only request that recordings be exported to the bucket configured in the integration; you can't specify a folder path there. However, you can use a Lambda function in the AWS account that fires for every file added to the bucket and moves that file to whatever location you'd like.
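As a rough sketch of that Lambda approach: the part worth getting right is the key-mapping logic that decides where each exported file should land. The mapping table, key layout, and folder names below are hypothetical examples (they are not from the product documentation); the actual way you determine a recording's division from the exported object is up to you.

```javascript
// Hypothetical mapping from division GUID to a "folder" prefix in the bucket.
// The GUID here is the example value from the question above.
const DIVISION_FOLDERS = {
  '21a01f72-d1bd-4efd-a163-64e7146b8349': 'sales/'
  // ...one entry per division id
};

// Given an exported object's key and the division id you looked up for that
// recording, return the destination key inside a per-division folder.
// Divisions without a mapping fall back to an "unassigned/" prefix.
function destinationKey(sourceKey, divisionId) {
  const folder = DIVISION_FOLDERS[divisionId] || 'unassigned/';
  return folder + sourceKey;
}
```

Inside the actual S3-triggered Lambda handler, you would read the object key from the event, compute `destinationKey(...)`, copy the object to that key with the S3 `CopyObject` API, and then delete the original.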

For #4, you'd have to add a conversationFilter that matches the Division ID you are looking for:

{
  "conversationQuery": {
    "conversationFilters": [
      {
        "clauses": [
          {
            "predicates": [
              {
                "type": "property",
                "dimension": "divisionId",
                "operator": "matches",
                "value": "<Some-division-GUID>"
              }
            ]
          }
        ]
      }
    ]
  }
}
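Putting the two pieces together, a sketch of the bulk-job request body with the division filter merged into the developer-center example might look like the following. This assumes the `conversationFilters` shape shown above; the `type: 'and'` fields on the filter and clause are my addition based on the analytics conversation query schema, and the division GUID is the example value from the question.

```javascript
// Build the request body for recordingApi.postRecordingJobs, filtered to a
// single division. Values marked as placeholders must be replaced.
function buildExportJobBody(divisionId) {
  return {
    action: 'EXPORT',
    actionDate: '2029-01-01T00:00:00.000Z',
    integrationId: '-- integration id here --', // only required when action is EXPORT
    includeScreenRecordings: true,
    conversationQuery: {
      interval: '2019-01-01T00:00:00.000Z/2019-06-11T00:00:00.000Z',
      order: 'asc',
      orderBy: 'conversationStart',
      conversationFilters: [
        {
          type: 'and', // assumed: filters take a type of "and" / "or"
          clauses: [
            {
              type: 'and',
              predicates: [
                {
                  type: 'dimension',
                  dimension: 'divisionId',
                  operator: 'matches',
                  value: divisionId
                }
              ]
            }
          ]
        }
      ]
    }
  };
}
```

You would then pass the result to `recordingApi.postRecordingJobs(buildExportJobBody('21a01f72-d1bd-4efd-a163-64e7146b8349'))` as in the original example.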

Many thanks, Jim!! I will try filtering the query as you indicated. Regards!!
