AI Voice Analytics is now available in the QM interface.
In the back office, you can specify which analyses the customer is allowed to run, among the following tasks: Satisfaction Estimation, Call Reason Extraction, Actions Extraction, Summary, Call Sequencing.
Whenever an AI analysis is available for a call, it is displayed next to the forms with a toggle that lets you switch between the grid view & the AI Analysis view.
For SFTP & API calls, you can now specify the analyses to be run. These analyses run only if the corresponding permissions are set in the back office. The QM block should be updated later this year to benefit from these capabilities.
Multilingual support is available, meaning you can have the transcription in one language and the analysis in another.
Keep in mind that Satisfaction Analysis is enabled by default for all customers, with no current way of disabling it.
You can check out the video demonstration here (FR conversation, ES analysis, EN interface): Link to the demo
You can now set a conformity level for each option of a criterion. When creating a criterion, every option has a dropdown list where you can choose between the "Conform", "Partially Conform" and "Non Conform" options.
A criterion can have any number of "Conform", "Partially Conform" and "Non Conform" options.
Conformities are displayed on the evaluation screen next to the scores.
Statistics for conformities will be released later in October.
All options of already created criteria are considered conform by default.
Added quick filter with tags
Clicking a tag in the interaction list filters the list to display only the conversations that carry that tag.
Rework of the criteria view
The criteria view now displays only the fields linked to the type of criterion you are viewing.
Fixes
We fixed an issue where the background menu would not resize properly on smaller screens.
We fixed an issue where setting 0 as a value for a trigger condition would not save properly.
It is now possible to add metadata to the AI APIs: custom data relevant to the customer's request and context is sent in the API request payload and then returned in the API response content.
This feature allows project managers and customers using the AI APIs to link a performed AI analysis with CCaaS-relevant parameters: call id or mail id, thread id, queue id or queue name, agent id, agent name, ...
The possibility to add metadata is available for:
Private APIs: APIs used for non-native integration with the Diabolocom CCaaS (with the X-DBLC-PHEDONE-KEY header). Endpoints:
api/webhooks/diabolocom/job/tasks (for audio-based AI analysis)
api/webhooks/diabolocom/job/json-tasks (for text-based AI analysis)
Public APIs: customer-facing APIs linked to an AI account (using a Bearer token generated from the AI solution). Endpoints:
api/job/tasks (for audio-based AI analysis)
api/job/text-tasks (for text-based AI analysis)
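The two families differ mainly in how requests are authenticated. A minimal Python sketch (the base URL is a placeholder; the header name and Bearer scheme come from the list above):

    import requests  # assumed HTTP client

    BASE_URL = "https://ai.example.com"  # placeholder, not the real host

    # Private API: authenticated with the X-DBLC-PHEDONE-KEY header
    private_headers = {"X-DBLC-PHEDONE-KEY": "<private-api-key>"}
    # e.g. requests.post(f"{BASE_URL}/api/webhooks/diabolocom/job/tasks", headers=private_headers, ...)

    # Public API: authenticated with a Bearer token generated from the AI solution
    public_headers = {"Authorization": "Bearer <ai-account-token>"}
    # e.g. requests.post(f"{BASE_URL}/api/job/tasks", headers=public_headers, ...)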
Request format
Metadata can be added using the meta object, which holds string key:value pairs.
⚠️ Meta keys are automatically converted to lower case (e.g. Diabolocom_Id is converted to diabolocom_id). It is recommended to add keys in lower case, so that the meta keys are identical in both the input and output payloads (content of the meta object).
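A one-line way to follow that recommendation client-side, mirroring the server's conversion (illustrative Python):

    # Lower-case all meta keys before sending, so input and output keys match
    meta = {"Diabolocom_Id": "98483", "Agent_Name": "Test Agent"}
    meta = {key.lower(): value for key, value in meta.items()}
    # -> {"diabolocom_id": "98483", "agent_name": "Test Agent"}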
Example 1: audio-based AI analysis. The request content type is form-data:

    meta[dblc-account-id]: "1023832"
    meta[dblc-call-id]: "98483"
    meta[dblc-agent-id]: "1234"
    meta[dblc-agent-name]: "Test Agent"
Example 2: text-based AI analysis. The request content type is JSON:

    "meta": {
        "mail-id": "1024",
        "contact-session-id": "5db2c7cc-64d5-4b0a-be1f-453e9b47bff6",
        "thread-id": "758"
    }
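Putting Example 1 together as a Public API call, a minimal Python sketch using the requests library: the host and the name of the audio file field are assumptions, while the endpoint, the Bearer authentication, and the meta[...] form fields come from the examples above.

    import requests

    with open("call_98483.wav", "rb") as audio:
        response = requests.post(
            "https://ai.example.com/api/job/tasks",  # placeholder host
            headers={"Authorization": "Bearer <ai-account-token>"},
            files={"file": audio},  # audio field name is an assumption
            data={
                "meta[dblc-account-id]": "1023832",
                "meta[dblc-call-id]": "98483",
                "meta[dblc-agent-id]": "1234",
                "meta[dblc-agent-name]": "Test Agent",
            },
        )
    job = response.json()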
Response format
The job status API response body is in JSON format. Metadata is contained in the meta object, which holds string key:value pairs.
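Continuing the sketch above, reading the metadata back once the job has completed (the job status endpoint path is a placeholder; only the meta object and its key:value shape come from this note):

    # Fetch the job status (path is hypothetical) and read the echoed metadata
    status = requests.get(
        "https://ai.example.com/api/job/<job-id>",  # placeholder path
        headers={"Authorization": "Bearer <ai-account-token>"},
    ).json()
    meta = status.get("meta", {})  # e.g. {"dblc-call-id": "98483", ...}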
Screenshots
Private API (INTERNAL USE ONLY BY PROJECT MANAGERS FOR NON-NATIVE INTEGRATION WITH DIABOLOCOM CCAAS)
api/webhooks/diabolocom/job/tasks (for audio-based AI analysis)
Sample request body using Private API for audio-based analysis
Sample result body using Private API for audio-based analysis
api/webhooks/diabolocom/job/json-tasks (for text-based AI analysis)
Sample request body using Private API for text-based analysis
Sample result body using Private API for text-based analysis
Public API (CUSTOMER USE)
api/job/tasks (for audio-based AI analysis)
Sample request body using Public API for audio-based analysis
Sample result body using Public API for audio-based analysis
api/job/text-tasks (for text-based AI analysis)
Sample request body using Public API for text-based analysis
Sample result body using Public API for text-based analysis
Mail Tags extraction AI model - Private API request example
Mail Scenario Setup example to use the Mail Tags extraction AI model API and add thread tags dynamically
Mail interaction example with tags extracted by the AI model (Mailbox view)
You can now send recordings to the QM via an SFTP server on our side. On every channel you can create any number of configurations, each of which creates an SFTP server on our side that clients can push audio files to.
Configuration
A feature toggle was created to allow access to this functionality per customer.
Creating a configuration creates an SFTP server, with an IP and host that you can use to connect.
You can specify a text pattern to look for so that all fields are mapped automatically and clients do not need to change their files (see the upload sketch below).
You have access to the SFTP logs in the SFTP configuration screen.
Authentication is secured by SSH key.
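For illustration, a client-side upload could look like the following paramiko sketch; the host, user name, key path, remote directory, and file-name pattern are all placeholders standing in for a real configuration:

    import paramiko  # assumed SFTP client library

    # Connect with the SSH key registered in the SFTP configuration
    key = paramiko.RSAKey.from_private_key_file("/home/agent/.ssh/qm_sftp_key")
    transport = paramiko.Transport(("sftp.example.com", 22))  # placeholder host
    transport.connect(username="customer-qm", pkey=key)

    sftp = paramiko.SFTPClient.from_transport(transport)
    # The file name follows the text pattern configured for automatic field mapping
    sftp.put("98483_agent1234.wav", "/upload/98483_agent1234.wav")
    sftp.close()
    transport.close()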
Added additional AI Insights
It is now possible to run additional AI analyses using a "tasks" field in the payload or in SFTP configurations, but these are not displayed on the front end yet; that will be implemented at a later date.
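For illustration, the field could look like this in a payload (the task identifiers below are hypothetical; only the "tasks" field name comes from this note):

    "tasks": ["summary", "satisfaction_estimation"]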
You can now check when actions were triggered and view their entire trigger history. The trigger history also contains the status code and response, shown on click.
New filters
It is now possible to filter by evaluation form on the interactions list.
It is now possible to filter by criteria type in the criteria list.
Fixes
Fixed an issue where two options having the same score made manual modification impossible.
Fixed an issue where deleting a report from a dashboard required saving in order to actually update the dashboard.
Technical/Backend
Engage-provided audio is not stored or encrypted by QM; instead, we leverage the native private API on demand.
Code for the SFTP functionality and additional AI Insights was deployed in the back office, but please do not activate it until the next release: it will not work, as the production infrastructure is still being deployed.
You can create eval grid-level triggers to fire actions when certain conditions are met.
Supported conditions are grid score, audio duration, and criteria options; you can combine multiple conditions.
Supported actions are currently limited to calling a webhook with a payload, with the plan to add more in the future.
To configure one, create an action in the Actions menu, then go to an Eval form and create a trigger with your action added to it.
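On the receiving side, a minimal webhook endpoint could look like this Flask sketch; the route and the payload shape are assumptions, since the payload content is whatever you configure on the action:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/qm-trigger", methods=["POST"])
    def qm_trigger():
        payload = request.get_json(force=True)
        # The payload shape is whatever was configured on the action;
        # the trigger history records the status code and response we return.
        print("Trigger received:", payload)
        return "", 204  # empty success response

    if __name__ == "__main__":
        app.run(port=8000)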
Audio encryption
We deployed audio encryption in the system, so the audio stored on our systems is now fully encrypted. It adds a small load time (about half a second to a second) but ensures more security.
The analytics solution used for the AI Solution is Fathom Analytics.
Analytics events are implemented on the AI Solution, triggered by actions the user can perform in the app (frontend events): login, signup, viewing models, plans, the API doc, ...
More events will be implemented, triggered by analysis execution (backend events).
After implementing the Audio encryption feature, we introduce a new parameter on transcription and transcript-to-ai-insight enabled endpoints to request that audio be stored long term.
The new parameter is keep_audio_after_analysis; it has two possible values: "0" (the default) or "1".
If set to "1" (true), the audio file is not deleted right after transcription; it stays securely stored and can be accessed on demand and consumed in our advanced transcription web components or via the API.
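A sketch of passing the parameter on an audio request (the host, endpoint path, and audio field name are placeholders; only the keep_audio_after_analysis parameter and its values come from this note):

    import requests

    with open("call_98483.wav", "rb") as audio:
        response = requests.post(
            "https://ai.example.com/api/transcript-to-ai-insight",  # placeholder path
            headers={"Authorization": "Bearer <ai-account-token>"},
            files={"file": audio},                    # field name is an assumption
            data={"keep_audio_after_analysis": "1"},  # keep audio for the Call library
        )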
Example of parameter impact and audio access from Call library
When the parameter is set to "0" or omitted (its default value is "0"), the audio is not stored after transcription and thus cannot be fetched in the Call library interface.
Sample API request transcript-to-ai-insight without parameter keep_audio_after_analysis
Call library interface - no audio stored to be loaded (no audio player)
When the parameter is set to "1", the audio is stored after transcription and can thus be fetched in the Call library interface.
Sample API request transcript-to-ai-insight with parameter keep_audio_after_analysis = "1"
Call library interface - audio stored and played