Importing a new schedule using python requests

Hello,

I'm trying to import new schedules on the platform using Python's requests module to make API calls with a Client Credentials grant. I've tried two approaches:
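For reference, here is roughly how I obtain the token beforehand (a minimal sketch; LOGIN_HOST, CLIENT_ID and CLIENT_SECRET are placeholders, e.g. LOGIN_HOST = "login.mypurecloud.de"):

import requests

# Client Credentials grant: POST the grant type to the region's login
# host, authenticating with the OAuth client id/secret via HTTP Basic.
response = requests.post(
    f"https://{LOGIN_HOST}/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
)
token = response.json()
token_type = token["token_type"]
access_token = token["access_token"]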

  • Using the import workflow (a condensed sketch follows this list):
    making a POST request to api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/import/uploadurl,
    then making a PUT request to the upload URL, and finally making a POST request to api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/import

  • Using the schedule update workflow:
    first generating a blank schedule with a POST request to /api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules,
    then a POST request to /api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/{new_schedule_id}/update/uploadurl,
    a PUT request with the planning to the upload URL, and finally a POST request to /api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/{new_schedule_id}/update
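Here is a condensed sketch of the import workflow (same variables as in the detailed requests below; the request bodies mirror the update workflow, so treat it as an outline rather than the exact contract):

# 1. Ask for an upload URL, declaring the exact gzipped payload length.
r = requests.post(
    f"https://api.{ENVIRONMENT}/api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/import/uploadurl",
    headers=requestHeaders,
    json={"contentLengthBytes": byte_size},
)
upload = r.json()

# 2. PUT the gzipped schedule JSON to the returned URL, echoing the
#    headers that came back alongside it.
requests.put(upload["url"], headers=upload["headers"], data=compressed_data)

# 3. Trigger processing of the uploaded file.
requests.post(
    f"https://api.{ENVIRONMENT}/api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/import",
    headers=requestHeaders,
    json={"uploadKey": upload["uploadKey"]},
)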

I only ever get 200 responses and I do manage to create a blank schedule. However, when making the final request to process the uploaded schedule, no schedule is imported or updated, depending on the workflow. Furthermore, when I make a GET request on api/v2/contentmanagement/status, no processes are listed. Has anyone encountered a similar problem or knows how to solve it?

Here are the details of my requests for the schedule update workflow:

Blank schedule request:

requestHeaders = {
    "Content-Type": "application/json",
    "Authorization": f"{token_type} {access_token}"
}

weekId = "2022-05-16"

blank_schedule_json = {
    "description": PLANNING_DESCRIPTION,
    "weekCount": 1
}

response = requests.post(
    f"https://api.{ENVIRONMENT}/api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules",
    headers=requestHeaders,
    json=blank_schedule_json
)

Status code: 201

Update upload URL:

requestHeaders = {
    "Content-Type": "application/json",
    "Authorization": f"{token_type} {access_token}"
}

schedule_size_json = {
    "contentLengthBytes": byte_size
}

response = requests.post(
    f"https://api.{ENVIRONMENT}/api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/{new_schedule_id}/update/uploadurl",
    headers=requestHeaders,
    json=schedule_size_json
)

Response: {'uploadKey': 'userUploads/schedule/import/xxx.input.json.gz', 'url': 'https://fileupload.mypurecloud.de/userUploads/schedule/import/...', 'headers': {'Content-Encoding': 'gzip', 'Content-Type': 'application/json', 'x-amz-tagging': 'organizationId=xxx-xxx-xxx-xxx&originPlatform=PureCloud&role=darth&owner=Dev-CloudAppsDarth@genesys.com'}}

Status code: 201

Schedule file upload:

test_dict = {
    "metadata": {
        "version": 1
    },
    "agentSchedules": [
        {
            "userId": "xxx-xxx-xxx-xxx-xxx",
            "shifts": [
                {
                    "activities": [
                        {
                            "activityCodeId": "xxx-xxx-xxx-xxx-xxx",
                            "startDate": "2022-05-16T12:00:00.000Z",
                            "lengthMinutes": 120,
                            "description": "",
                            "paid": True
                        }
                    ],
                    "manuallyEdited": True
                }
            ],
            "fullDayTimeOffMarkers": [],
            "metadata": {
                "version": 1
            }
        }
    ]
}

jsonfilename = 'data/plannings/test.json.gz'

json_str = json.dumps(test_dict) + "\n"
json_bytes = json_str.encode('utf-8')

with gzip.open(jsonfilename, 'w') as fout:
    fout.write(json_bytes)

byte_size = os.path.getsize(jsonfilename)

requestHeaders = {
    'Content-Encoding': 'gzip',
    'Content-Type': 'application/json',
    'x-amz-tagging': 'organizationId=xxx-xxx-xxx-xxx&originPlatform=PureCloud&role=darth&owner=Dev-CloudAppsDarth@genesys.com'
}

with open(jsonfilename, 'rb') as f:
    response = requests.put(upload_url, headers=requestHeaders, data=f)

Status code: 200 OK

Response: {'Content-Length': '0', 'Connection': 'keep-alive', 'Date': 'Tue, 26 Apr 2022 14:32:17 GMT', 'x-amz-expiration': 'expiry-date="Wed, 04 May 2022 00:00:00 GMT", rule-id="wfmService-ScheduleImport"', 'x-amz-server-side-encryption': 'AES256', 'ETag': '"31fb352482b6515ceb253b4197c3e713"', 'Server': 'AmazonS3', 'X-Cache': 'Miss from cloudfront', 'Via': '1.1 d32d70ba49809b2292cca689969507a0.cloudfront.net (CloudFront)', 'X-Amz-Cf-Pop': 'LHR50-P1', 'X-Amz-Cf-Id': 'bPgpD3PaGM3Rnao1rp7dw3PBKwKASxooQtmIIN3N-MvhOH5Oelv40A=='}

Update processing call:

requestHeaders = {
    "Content-Type": "application/json",
    "Authorization": f"{token_type} {access_token}"
}

update_json = {
    "uploadKey": upload_id
}

response = requests.post(
    f"https://api.{ENVIRONMENT}/api/v2/workforcemanagement/businessunits/{bu_id}/weeks/{weekId}/schedules/{new_schedule_id}/update",
    headers=requestHeaders,
    json=update_json
)

I hope I made an obvious mistake that can be fixed easily, or that someone has encountered a similar problem!

Best regards,

Maxime

Hi Maxime -

Looking at your process, it seems like you're doing things mostly correctly; nothing stands out as being grossly wrong.
A few things:

"Furthermore, when I make a GET request on api/v2/contentmanagement/status, no processes are listed"

I don't know what this route does; WFM schedule processes are not managed by content management at all.

I noticed that in your payload JSON file you have "paid": True. I don't know if this will solve the problem, but the back end runs on the JVM, which typically expects boolean values in lower case, so you might try switching to "paid": true and see if that helps.

A couple of things that might help:
Schedule import/update status (and any errors associated with the process) is communicated via websocket notifications. You should subscribe to the topic before you make the call to import/update your schedule: for small import/update jobs the notification can be sent almost instantly, so you don't want to create a race condition.

The topic you'll want is v2.workforcemanagement.businessunits.{businessUnitId}.schedules. The response you get from the /update or /import routes will include an operation ID; when you get a notification with a matching operation ID, it should include details if there was a problem with your JSON payload.
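Roughly, the subscription flow looks like this (a minimal sketch using the notifications API and the websocket-client package, reusing the variables from your posts above):

import json
import requests
import websocket  # pip install websocket-client

base = f"https://api.{ENVIRONMENT}"
auth_headers = {
    "Content-Type": "application/json",
    "Authorization": f"{token_type} {access_token}"
}

# 1. Create a notification channel.
channel = requests.post(f"{base}/api/v2/notifications/channels", headers=auth_headers).json()

# 2. Subscribe it to the BU schedules topic *before* calling /update or /import.
topic = f"v2.workforcemanagement.businessunits.{bu_id}.schedules"
requests.put(
    f"{base}/api/v2/notifications/channels/{channel['id']}/subscriptions",
    headers=auth_headers,
    json=[{"id": topic}]
)

# 3. Listen on the websocket; match the operation ID returned by /update or /import.
ws = websocket.create_connection(channel["connectUri"])
while True:
    event = json.loads(ws.recv())
    if event.get("topicName") == topic:
        print(event["eventBody"])  # status plus any validation errors
        break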

Finally, if none of the above helps, you can reach out to support with a correlationId from the final /update or /import call so we can investigate more thoroughly.

Thank you very much for your answer and insight. I solved my problem; it was caused by a wrong byte-size estimate for the upload. I'll post a comment detailing the solution ASAP.

Very glad I was able to help!

I managed to solve the issue. The problem was coming from the byte size of the .json.gz file sent when generating the upload URL. Instead of using the os.path.getsize method, you should get the byte size from the compressed JSON bytes, as follows:

# Compress the JSON in memory (gzip, matching the uploaded format) and
# measure the exact payload length; len() gives the true byte count.
compressed_data = gzip.compress(json_bytes)
byte_size = len(compressed_data)
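
The key point is that contentLengthBytes must match exactly what you PUT, so the safest pattern is to upload the very same bytes you measured (a sketch; upload_url and the tagging header come from the /uploadurl response as in my earlier posts):

# Upload exactly the bytes that were measured above, rather than
# re-reading a file whose on-disk size may differ from the estimate.
upload_headers = {
    'Content-Encoding': 'gzip',
    'Content-Type': 'application/json',
    # plus the 'x-amz-tagging' header returned by the /uploadurl call
}
response = requests.put(upload_url, headers=upload_headers, data=compressed_data)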

Hope this can help someone in the future !
