Responsible AI – What is it, and why it’s important to CX?

Last week I attended a stimulating webinar discussion on the subject of responsible AI, delivered by Pegasystems.

Over the last few years, AI in the form of machine learning (ML) and natural language processing (NLP) has been one of CX tech’s most ubiquitous developments. It crops up almost everywhere: product recommendations in marketing and eCommerce systems, sentiment analysis to detect a customer’s mood, and pipeline management to help the sales force focus on the opportunities most likely to close. These relatively benign examples give little cause for concern and, when well executed, generate an uplift in sales and faster resolution of customer queries. All good so far.

However, there is a potentially darker side to AI and how customers and employees perceive it. There are also substantial risks associated with AI, especially in highly regulated industries like finance, insurance, and healthcare. Trust is critical in CX, irrespective of industry. Hence the need for responsible AI. This webinar provided some practical guidance and recognition that AI needs effective governance to ensure it doesn’t slip over to the dark side.

Balancing risk and reward – the AI dilemma

At its best, AI provides an effective sense-and-respond mechanism, ensuring every customer interaction is relevant to that customer’s individual context. Dynamically orchestrating relevance through all customer journeys, across all devices and channels, and in real time is a massively complex challenge. Customer journeys are almost infinite in their variety, and often chaotic. Multiply that by the number of customers the company serves, and then again by variables such as each customer’s context, emotions, wants, and needs. Getting the response right every time is not possible without AI. Used wisely, AI delivers a more relevant and rewarding experience for each customer, generating more business income. Get it wrong, and the company’s reputation can be ruined and the firm may attract severe financial penalties from regulators.

The immediate dark side of AI is not the apocryphal takeover of the human race by super-intelligent robots, or even the displacement of the workforce. Bias can creep in, especially when the data used to train AI algorithms is tainted with often unintended biases. There is also the added burden of industry-specific regulations and those common to all, such as the EU General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). It is not just the use of customer data that must be transparent, but also how AI triggered a decision – its explainability. This brings me to the dilemma faced with AI – opaque or transparent?

The outputs of transparent AI are relatively easy to understand; the logic can be explained. However, where AI is networked, as in deep learning, the outputs from one cluster of algorithms provide the inputs to another. What happens in between, involving potentially billions of permutations, is impossible to fathom. So is opaque always bad? No. It has its place, especially where the outcomes do not touch sensitive areas such as offering credit to customers. Dynamically orchestrating a response across potentially millions of simultaneous customer journeys may well require both transparent and opaque AI. So what advice did Pegasystems give to balance risk and reward and square the AI dilemma circle?
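
Before getting to that advice, it helps to make the transparent end of the spectrum concrete. The sketch below (Python with scikit-learn, using entirely hypothetical credit data and feature names of my own invention – not Pegasystems’ method) fits a small logistic regression for a credit decision. Its weights can be read off and explained to a customer or a regulator, which is precisely what a networked deep-learning model cannot offer.

```python
# A sketch of "transparent" AI: a logistic regression whose weights
# can be inspected and explained. Data and features are hypothetical.
from sklearn.linear_model import LogisticRegression

features = ["income_k", "existing_debt_k", "years_as_customer"]
X = [
    [52.0, 4.0, 6],   # income and debt in £000s
    [31.0, 12.0, 1],
    [78.0, 2.5, 9],
    [24.0, 9.0, 2],
]
y = [1, 0, 1, 0]  # 1 = credit offered, 0 = declined

model = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushed the decision: the
# explainability that an opaque, deep-learning model cannot provide.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```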

Four attributes + accountability

According to Pegasystems, there are four attributes that can guide responsible AI, with my comments in brackets:

  • Fairness – balanced and unbiased for all customers or customer groups (see the sketch after this list).
  • Transparent – explainable to a human audience (when the decision has material consequences for the customer, such as offering credit inappropriately or refusing it due to bias; there was some discussion on this, as it is not so clear-cut, and I shall come back to it in a future blog).
  • Empathetic – safely adheres to social norms, to develop customer trust. (In short, ‘walking in the customer’s shoes’.)
  • Robust – hardened to real-world exposure, to prevent unplanned behavior.
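
To make the fairness attribute concrete, here is a minimal demographic-parity check in Python, with entirely hypothetical data and a made-up tolerance – an illustration of the idea, not how Pega’s ethical bias detection actually works. It compares the model’s approval rates across two customer groups and flags any large gap for human review.

```python
# A minimal demographic-parity check (hypothetical data).
# It compares approval rates between two customer groups to flag
# potential bias before a model gets anywhere near customers.

decisions = [
    # (customer_group, model_approved)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Share of customers in `group` that the model approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")
rate_b = approval_rate("group_b")

# Flag for human review if the gap exceeds a tolerance the business
# (and its regulators) would accept; 10 points is purely illustrative.
if abs(rate_a - rate_b) > 0.10:
    print(f"Possible bias: {rate_a:.0%} vs {rate_b:.0%} approval rates")
```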

These four attributes reflect the importance of ethics as part of the enterprise’s culture. While there should be someone with overall accountability for the use of AI across the enterprise, everyone has a responsibility to treat customers fairly, transparently, and with empathy, and to speak up if they detect bias. Pega Infinity, Pegasystems’ customer engagement platform, has tools to help, including ethical bias detection and the Pega Customer Empathy Advisor, which suggests next best actions that mutually benefit customers and companies. The Pega T-Switch also allows experimentation with opaque AI before activation.

There is no doubt that AI is essential in the CX domain to trigger the most relevant action at any scale. However, like all powerful tools, it must be handled with care – and ethically.
