Hi,
I think I found an issue with the Lex integration when it is used in a web chat call flow.
Basically, the Bot Input Text is being sent to the bot instead of being sent to the guest in the chat.
The workaround I found is to put a Send Response before calling the bot and to set Bot Input Text to "".
The purpose of the "Bot Input Text" is to pass text to the Lex Bot when it is invoked/triggered from Architect.
It is there in case you need to pass anything to Lex (and leverage it on the Lex/bot side to trigger specific logic).
Then, after the bot is invoked, it joins the chat session and exchanges messages with the customer (receiving the customer's next messages and sending replies).
See here: https://help.mypurecloud.com/articles/call-lex-bot-action/
Under "Call Lex Bot action in inbound chat flows"
"Bot Input Text: Enter the text that you want to send to the Lex bot, not the chat. Generally, the only time you set this field is when you chain Lex Bot actions.
Note: If you want to send a greeting to the chat, add a Send Response action to the State or Task just above the Call Lex Bot action. Do not use Bot Input Text to send greeting text."
Hi Jerome, I think something is wrong here in both the implementation and the documentation. Even the PureCloud voice implementation has a "starting audio" field that is played to the caller, and there is nothing to send data to the bot, which is the correct way to do it.
Similarly, Amazon Connect has a field to play a prompt or TTS to the caller/chat guest as a trigger for the caller to say something to the bot. There is no notion of chaining there.
It works either way but it is really counter-intuitive and I won't be the last to have concerns with this.
"Type a message that provides callers with information about what they can do. For example, use a message that matches the intents used in the bot, such as “To check your account balance, press or say 1. To speak to an agent, press or say 2.”"
"Voice and chat mode are opposite in their starting mode - voice starts by the bot saying something and chat starts by waiting for the customer to say something. If a customer wants the chat bot to open with a greeting, then he should put that greeting into a Send Response action prior to the LexBot action, and he'll get the behavior he expects. The main reason this is done is that often chat bots are chained together and/or used in places that aren't the start of the flow. It would be bad for the web person to be waist-deep into a chat with one bot and for a second bot to say "Hello, welcome to Acme!" in the middle of a conversation. Hence, you can put the greeting at the beginning of your chat flow with a Send Response action if desired."
What chaining Lex bot actions means here is that you can trigger Lex at different stages of your Architect flow, AND multiple times.
You could invoke a Lex bot that is in charge of a specific task. Once the bot has finished that processing, it returns the intent/result to the Architect flow, where you can decide to do something such as transferring to an agent, or to invoke a Lex bot again (a different one, or the same one with some input "parameters" via the Bot Input Text).
Of course, you may prefer instead to have a single bot that does everything (so the bot is invoked only once from the Architect flow).
If you want to suggest a different way to manage this, you can still submit an idea in the Genesys Cloud Ideas Lab: https://purecloud.ideas.aha.io/ideas
Hi Jerome, I'm now trying to get the Lex bot working in messaging. The behaviour in chat is not ideal, but at least I can get it to work.
In messaging, if I put an empty string in Bot Input Text, I get a No Match. I also tried NOT_SET and Message.Message.Body.
NOT_SET seems to transfer to an agent without me being able to see which path it takes.
Message.Message.Body also returns a No Match.
The question is: how do I make the customer's reply to my previous Send Response the input to the bot?
Thank you
As I said last time, I haven't used/created Lex bots. But I just tried this morning with Chat & SMS contacting a "template coffee bot".
I think that in messaging, you have to pass a value as Input Text to the bot; there is no default/unset value. I don't know the internals of Lex or the integration, but I wouldn't be surprised if this comes from the fact that messaging is by nature sessionless (unlike a chat, which has a notion of session in the protocol itself).
So if you pass NOT_SET in the Input Text, it is understood as undefined (like undefined in JavaScript, for example: not a string, just the primitive undefined).
The Bot Input Text will be considered by the bot as the first message that is received in the bot flow.
In my case, I tried with "hello" or another valid word, and that triggered the coffee bot logic. If you send something the bot doesn't understand, it fails with a No Match (on the Lex side). If I send "" (an empty string as an Expression), "abcd", or a French word, the Lex bot (defined for English) fails on its side (a kind of unrecognized-word failure).
Regarding Message.Message.body (set as Expression, no quotes, lowercase b in body): it corresponds to the first message sent by the customer (the one that triggered the conversation and the Architect message flow), not the last message received. There is no input collection in the Architect flow for chat/message. A bot would have to capture the customer's subsequent messages and return them in its result/output if you wanted to capture something sent after the initial message.
In my case, I sent "hello" or "order coffee" as the first message (anything valid from the English Lex standpoint). In my Architect flow, I had a Send Response just to say "welcome..." and then a call to the Lex bot passing Message.Message.body as Input Text.
If that corresponds to what you did already (but you get a failure), it might be worth checking Message.Message.bodyType. I could only access an environment with SMS messaging (and in that case, I know the bodyType is text). If you are using another messaging platform, which I haven't used myself, I don't know whether it could send a bodyType of rtf or html.
One thing you can do to help troubleshoot (if you haven't done it already) is to call a Set Participant Data block in your flow to "attach" these two values: two attributes whose values (set as Expression) are Message.Message.body and Message.Message.bodyType. I then use the developer tools / API Explorer and run a query to get the conversations of the logged-in user (after the chat/SMS is connected to my agent/user). This way I can see what values were attached, without having to create a Script for it.
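If it helps, the troubleshooting step above can also be scripted instead of done by hand in the API Explorer. Below is a minimal sketch that pulls the attached attributes out of a conversation payload; it assumes the documented response shape of GET /api/v2/conversations/{conversationId}, where each participant carries an "attributes" map populated by the Set Participant Data action. The attribute names and values here are invented for illustration.

```python
# Sketch: read the participant attributes attached by a Set Participant Data
# action out of a Genesys Cloud conversation payload. Assumes each entry in
# "participants" may carry an "attributes" dict (per the platform API docs);
# all names/values below are made up for the example.

def collect_participant_attributes(conversation: dict) -> dict:
    """Merge the attribute maps of all participants into a single dict."""
    merged = {}
    for participant in conversation.get("participants", []):
        merged.update(participant.get("attributes", {}))
    return merged

# Example payload shaped like an API Explorer response (values invented):
# the two attributes attached in the flow show up on the customer participant.
sample = {
    "id": "abc-123",
    "participants": [
        {"purpose": "customer", "attributes": {
            "msgBody": "order coffee",
            "msgBodyType": "text",
        }},
        {"purpose": "agent", "attributes": {}},
    ],
}

attrs = collect_participant_attributes(sample)
print(attrs["msgBody"], attrs["msgBodyType"])
```

Feeding this the JSON returned for a real conversation would show at a glance what Message.Message.body and Message.Message.bodyType contained when the flow ran.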
Thanks Jerome, sorry for the late answer, I got assigned another project and had to put this on the back burner. I will look at it more deeply as soon as possible, but there are some interesting leads in there that I still need to explore.