Responsible AI – What is it, and why is it important to CX?

AI helps scale relevance by triggering the best contextual response

Over the last few years, AI in the form of machine learning (ML) and natural language processing (NLP) has been one of CX tech's most ubiquitous developments. It crops up almost everywhere: product recommendations in marketing and eCommerce systems, sentiment analysis to detect a customer's mood, and pipeline management to help the sales force focus on the opportunities most likely to close. These relatively benign examples offer little cause for concern and, when well executed, generate an uplift in sales and faster resolution of customer queries. All good so far.
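To ground the sentiment example, here is a minimal sketch of how incoming messages might be scored and routed, assuming the open-source Hugging Face transformers library. The model, labels, and escalation threshold are illustrative choices, not any particular vendor's implementation.

```python
# A minimal sketch of sentiment detection on customer messages.
# Assumes the Hugging Face `transformers` library (plus a backend such as
# PyTorch); the escalation threshold of 0.9 is an arbitrary illustration.
from transformers import pipeline

# Downloads a default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

messages = [
    "My order arrived two weeks late and nobody answered my emails.",
    "Thanks, the replacement arrived the next day. Great service!",
]

for msg in messages:
    result = classifier(msg)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.998}
    # Route clearly unhappy customers to a human agent instead of a bot.
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Escalate to a human agent: {msg!r}")
    else:
        print(f"Standard handling ({result['label']}): {msg!r}")
```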

However, there is a potentially darker side to AI and how customers and employees perceive it.

The sudden rise in the use of generative AI, such as ChatGPT, has caused considerable concern because this new machine learning capability can generate convincing content, such as reports, images, and videos, without human intervention. The specter of clever scams and disinformation fuels fears that humanity will lose control of intelligent bots, though much of this fear is misplaced. There is, however, a need for caution. AI carries substantial risks, especially in highly regulated industries like finance, insurance, and healthcare, where the explainability of AI-generated recommendations is essential. Trust is critical in CX, irrespective of industry. Hence the need for responsible AI.

Balancing Risk and Reward and the AI Dilemma

At its best, AI provides an effective sense-and-response mechanism to ensure every customer interaction is relevant to that customer's particular context. Dynamically orchestrating relevance through all customer journeys, across all devices and channels, and in real time is a massively complex challenge. Customer journeys are almost infinite in their variety, and often chaotic. Multiply that by the number of customers the company serves, and then again by variables such as each customer's context, emotions, wants, and needs. Getting the response right almost every time is not possible without AI. Used wisely, AI delivers a more relevant and rewarding experience for each customer and generates more business income. Get it wrong, and the company's reputation can be ruined, and the firm may attract severe financial penalties from regulators.

The immediate dark side of AI is not the apocalyptic takeover of the human race by super-intelligent robots, or even the displacement of the workforce. The more mundane risk is bias, which can creep in when the data used to train AI algorithms is tainted with often unintended prejudices. There is also the added burden of industry-specific regulations and those common to all, such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). It is not just the use of customer data that must be transparent, but also how AI triggered the decision – its explainability. This brings me to the dilemma faced with AI – opaque or transparent?
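As an illustration of how tainted training data shows up in practice, here is a toy check for one common symptom: unequal approval rates across customer groups, measured against the "four-fifths" ratio used in disparate-impact testing. The group names and numbers are hypothetical.

```python
# A toy disparate-impact check: flag any group whose approval rate falls
# below 80% of the best-treated group's rate (the "four-fifths" rule).
# Groups and counts are made up for illustration.
approvals = {
    # group: (approved, total applications)
    "group_a": (820, 1000),
    "group_b": (610, 1000),
}

rates = {g: ok / total for g, (ok, total) in approvals.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")
```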

The outputs of transparent AI are relatively easy to understand; the logic can be explained. However, where AI is networked, as in deep learning, the outputs of one cluster of algorithms provide the inputs to another, and what happens in between, involving potentially billions of permutations, is impossible to fathom. So is opaque always bad? No. It has its place, especially where the outcomes do not impact sensitive areas such as offering credit to customers. Dynamically orchestrating a response across potentially millions of simultaneous customer journeys may require both transparent and opaque AI. So how do we square the circle of the AI dilemma?
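To show what "transparent" means in practice, here is a sketch of an interpretable, credit-style model whose learned weights map directly to human-readable reasons, assuming scikit-learn. The features and data are invented for illustration.

```python
# A sketch of "transparent" AI: a logistic regression whose learned weights
# can be read out and explained to a customer or a regulator, unlike the
# internals of a deep network. Features and labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "years_as_customer", "missed_payments"]
X = np.array([
    [55, 4, 0],
    [30, 1, 3],
    [70, 8, 0],
    [25, 2, 4],
    [60, 6, 1],
    [28, 1, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = credit offer approved

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient yields a plain-language explanation, e.g.
# "missed payments lowered the score".
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```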

Four attributes + accountability

Four attributes can guide responsible AI:

  • Fair – balanced and unbiased for all customers and customer groups.
  • Transparent – explainable to a human audience when it has material consequences for the customer, such as offering credit inappropriately or refusing it due to bias.
  • Empathetic – safely adheres to social norms to develop customer trust; in short, 'walking in the customer's shoes'.
  • Robust – hardened to real-world exposure to prevent unexpected behavior (see the sketch after this list).
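
By way of illustration, here is a minimal robustness guard around a scoring model: low-confidence predictions and out-of-range inputs fall back to a human rather than triggering an unexpected action. The model interface, field names, and thresholds are all hypothetical.

```python
# A minimal robustness guard, assuming any model that returns a confidence
# score alongside its recommended action. Everything here is illustrative.
def choose_action(customer_context: dict, model) -> str:
    score, action = model.predict(customer_context)  # hypothetical interface

    # Never act on inputs the model was not trained to handle.
    if customer_context.get("age", 0) not in range(18, 120):
        return "route_to_human"

    # Only automate when the model is confident.
    if score < 0.7:
        return "route_to_human"

    return action
```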

These four attributes reflect the importance of ethics as part of the enterprise's culture. While there should be someone with overall accountability for the use of AI across the enterprise, everyone has a responsibility to treat customers fairly, transparently, and with empathy, and to speak up if they detect bias. Some customer engagement platforms have tools to help, including ethical bias detection and the Pega Customer Empathy Advisor, which suggests the next best actions to mutually benefit customers and companies. The Pega T-Switch also allows experimentation with opaque AI before activation.

Adobe Experience Cloud recently added Adobe Firefly, its generative AI capability for creating content in a variety of forms. The ML algorithms are trained on the Adobe Stock dataset, and from just a few words Firefly will turn concepts into visual art. With a human in the loop, the marketer can decide which options to approve, which minimizes the risk of publishing inappropriate content while accelerating its production.
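The same human-in-the-loop pattern can be sketched generically. This is not Firefly's API, just the shape of the workflow: the machine proposes candidates, and nothing reaches the customer without explicit human sign-off. Here, generate and ask_reviewer are stand-ins for a real generator and review interface.

```python
# A generic human-in-the-loop gate for generated content. `generate` and
# `ask_reviewer` are hypothetical stand-ins for a real generator and a
# review UI; the point is that a person approves before anything ships.
def publish_with_review(prompt: str, generate, ask_reviewer, n_candidates=4):
    candidates = [generate(prompt) for _ in range(n_candidates)]
    approved = ask_reviewer(candidates)  # human picks zero or more to approve
    # Nothing reaches the customer without explicit human sign-off.
    return approved
```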

Undoubtedly, AI is essential in the CX domain for triggering the most relevant action at scale. However, like all powerful tools, it must be handled carefully and ethically.
