Upload CSV almost but not quite

Hi,

So I'm triggering an import job to get a URI - that seems to be working fine, and it returns a URL to post to.

I'm running that from a function library (I'll post it below).

However, when I try to upload the CSV I get a 200 response, but it appears it doesn't like my CSV.
I'm trying to append data to an existing data table.

This is a simple two-column table, with phone (expressed as tel:+......) and inin-outbound-id.

Here are the first couple of rows (these numbers have been randomised):

cat ../tempfiles/outboundlists/outboundNums.csv
phone,inin-outbound-id
tel:+61428462115,9067161e609325abc7895de2ada21506
tel:+61428461645,5f395e0f730453021ff176996b20fb91
tel:+61428461897,6df7771114358d70455a6760c0a37ac1
tel:+61428461778,91f38ff0f017139c2011565b032dc6af

Here's my upload code:

import requests, traceback

url = uploadURI
try:
    headers = get_token(role)  # returns a full Authorization header dict from a function I built (posted below)
    # fileName refers to where I saved the file
    # (also tried files={'file': (fileName, open(fileName, 'rb'), 'text/csv')})
    with open(fileName, 'rb') as f:
        files = {'file': f}
        r = requests.post(url, files=files, headers=headers, verify=True)
    print(r.status_code)
    print(r.headers)
    print(r.content)
except Exception:
    print('exception')
    traceback.print_exc()

Here's the response from the status request. So it looks like I'm sending the right headers, but something is missing. BTW - the CSV was created from a dataframe, so if I can avoid exporting to CSV at all, that would be awesome.

{
  "id": "14619592-f5d1-44b1-b0ba-5fb9c3f3cd33",
  "owner": {
    "id": "9f73e8dd-2030-449e-80c5-ff0b1e8584c2",
    "selfUri": "/api/v2/users/9f73e8dd-2030-449e-80c5-ff0b1e8584c2"
  },
  "status": "Succeeded",
  "dateCreated": "2023-03-10T06:46:35Z",
  "uploadURI": "https://apps.mypurecloud.com.au/uploads/v2/datatables?additionalInfo= [cut out the rest from here]",
  "importMode": "Append",
  "errorInformation": {
    "message": "Success",
    "code": "SUCCESS",
    "status": 200,
    "messageWithParams": "Success",
    "details": [],
    "errors": [
      {
        "message": "Import failure at item 1 with key \"?\"",
        "code": "FLOWS_DATATABLES_IMPORT_FAILURE",
        "status": 400,
        "messageWithParams": "Import failure at item {itemNum} with key \"{key}\"",
        "messageParams": {
          "itemNum": "1",
          "key": "?"
        },
        "details": [],
        "errors": []
      },

And here's the trigger call:

def triggerDataTableImport(role, tableID, mode):
    api_token = get_token_sdk(role)
    api_instance = PureCloudPlatformClientV2.ArchitectApi(api_token)
    if mode == "ReplaceAll":
        body = {"importMode": "ReplaceAll"}
    else:
        body = {"importMode": "Append"}

    try:
        api_response = api_instance.post_flows_datatable_import_jobs(tableID, body)
        return api_response.upload_uri, api_response.id
    except ApiException as e:
        print("Exception when calling ArchitectApi->post_flows_datatable_import_jobs: %s\n" % e)

BTW - I know there is a method for inserting rows one at a time, but this job will run multiple times a day and may carry quite a lot of data, so the preference is clearly to upload a CSV.
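On the dataframe point above, this is roughly what I'd like to end up with - a minimal sketch (assuming df is the pandas DataFrame, and url/headers are as in my upload code) that serialises to CSV in memory so nothing touches disk:

import io
import requests

csv_bytes = df.to_csv(index=False).encode('utf-8')  # serialise the DataFrame in memory - no temp file
files = {'file': ('outboundNums.csv', io.BytesIO(csv_bytes), 'text/csv')}
r = requests.post(url, files=files, headers=headers, verify=True)
print(r.status_code, r.content)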

FWIW - here is how I generate the headers:

# Authentication functions
import PureCloudPlatformClientV2
from PureCloudPlatformClientV2.rest import ApiException
import base64, requests, configparser
import sys, os


connconf_obj = configparser.ConfigParser()
connconf_obj.read("configs/connection.cfg")

def get_token(role='devReportingRed'):
    role = 'devReportingRed' if role is None else role
    ENVIRONMENT = 'mypurecloud.com.au'  # e.g. mypurecloud.com
    env_params = connconf_obj[role]
    roleFunction = env_params["function"]
    CLIENT_ID = env_params["id"]
    CLIENT_SECRET = env_params["secret"]
    if not CLIENT_ID:
        # Role specified not found - fall back to reporting read-only from dev
        CLIENT_ID = '**************'
        CLIENT_SECRET = '******************'
    authorization = base64.b64encode(bytes(CLIENT_ID + ":" + CLIENT_SECRET, "ISO-8859-1")).decode("ascii")
    request_headers = {
        "Authorization": f"Basic {authorization}",
        "Content-Type": "application/x-www-form-urlencoded"
    }
    request_body = {
        "grant_type": "client_credentials"
    }
    response = requests.post(f"https://login.{ENVIRONMENT}/oauth/token", data=request_body, headers=request_headers)
    if response.status_code != 200:
        print(f"Failure: {str(response.status_code)} - {response.reason}")
        sys.exit(response.status_code)
    response_json = response.json()
    requestHeaders = {
        "Authorization": f"{response_json['token_type']} {response_json['access_token']}"
    }
    return requestHeaders
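(Aside: I believe the SDK can also handle the client-credentials dance itself - a minimal sketch, assuming the SDK's get_client_credentials_token method and the same CLIENT_ID/CLIENT_SECRET read from connection.cfg - though the raw upload POST still uses the header dict above:)

import PureCloudPlatformClientV2

# Sketch only: point the SDK at the AU region, then let it fetch a client-credentials token
PureCloudPlatformClientV2.configuration.host = 'https://api.mypurecloud.com.au'
api_client = PureCloudPlatformClientV2.api_client.ApiClient()
api_client.get_client_credentials_token(CLIENT_ID, CLIENT_SECRET)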

Hello,

Your CSV file (at least its column names) doesn't match what you defined for your Architect Datatable.

A Datatable is defined with a Reference Key (attribute/field name = "key"). During the creation/configuration of the Datatable, you are asked to enter a "Reference Key Label", and you can then define Custom Fields.

Let's say I define a table with Reference Key Label = "inin-outbound-id" (using this field in my example to be sure it is unique), and then define a custom field named "phone". The Datatable row then contains 2 attributes: "key" (which contains the inin-outbound-id value - the column name is still "key") and "phone" (the name of the custom field). My CSV would then use phone and key as the column names:
phone,key
tel:+61428462115,9067161e609325abc7895de2ada21506
tel:+61428461645,5f395e0f730453021ff176996b20fb91
tel:+61428461897,6df7771114358d70455a6760c0a37ac1
tel:+61428461778,91f38ff0f017139c2011565b032dc6af
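And since you mentioned generating the CSV from a dataframe: a minimal sketch of the fix on the pandas side (assuming your existing column names) would be to rename the Reference Key column to "key" before exporting:

# Assuming df is the pandas DataFrame you export from
df = df.rename(columns={'inin-outbound-id': 'key'})  # the Reference Key column must be named "key"
df.to_csv('outboundNums.csv', index=False)  # header row becomes: phone,key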

Regards,


Thanks @Jerome.Saint-Marc

I had the column name wrong... of course!

Thanks - my head was jelly by this stage.
