Artificial intelligence is reshaping customer experience at speed. From chatbots handling thousands of queries simultaneously to sentiment analysis tools flagging at-risk customers before they churn, AI now sits at the front line of how organisations serve people. But as adoption accelerates, regulation is following close behind. The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, and many businesses have filed it under legal and compliance. Customer experience leaders would be wise to pull it back out.

What is the EU AI Act?

The EU AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024, according to the Act's official implementation timeline. Its obligations are phased in gradually. The prohibitions on certain AI practices began to apply on 2 February 2025, while the broader framework governing high-risk AI systems, governance and penalties takes full effect from 2 August 2026.

The Act applies to organisations operating in or serving EU markets, including those headquartered elsewhere. Its architecture is risk-based. As the Future of Privacy Forum explains in its analysis of the Act's prohibited practices, the framework takes a tiered approach based on the severity of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Crucially, the FPF notes that the Act prohibits specific practices involving AI systems, not the technologies themselves.

Why Should Customer Experience Leaders Care?

CX leaders may not write the compliance reports, but they own the layer where AI becomes visible to customers. When a chatbot gives a misleading answer, when a recommendation engine surfaces something a customer finds intrusive, or when a voice system makes a decision a customer cannot understand or challenge, the reputational consequences land with the brand, not the vendor.

AI is now embedded across the entire service journey: chatbots and voicebots managing first contact, agent assist tools shaping adviser responses, predictive models determining which customers receive proactive outreach, and workforce optimisation platforms reshaping contact centre operations. Poorly governed AI in any of these areas does not just create regulatory exposure. It damages trust, and trust is the currency customer experience runs on.

Which CX Use Cases Could Be Affected?

Not every AI tool deployed in a contact centre will fall into the high-risk category, but some use cases warrant close attention. Writing for the International Association of Privacy Professionals (IAPP), solicitor Richard Lawne outlines how the Act classifies biometric AI systems, and the implications are directly relevant to CX teams.

On emotion recognition, the IAPP analysis draws a clear and significant line. Using voice analysis to gauge customer sentiment during support calls would be classified as high-risk under the Act. Using the same technology to monitor employee emotions in that same context, however, would be prohibited entirely. Emotion recognition systems in workplaces and educational settings are banned except where strictly necessary for medical or safety purposes.

When it comes to biometric categorisation, the Act prohibits systems that categorise individuals according to characteristics including race, political opinions, trade union membership, religion and sexual orientation. Categorisation systems involving other sensitive characteristics fall into the high-risk tier and carry the compliance obligations that come with it.

Post-interaction biometric identification, the analysis of biometric data after initial capture, is permitted but classified as high-risk. Real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited, with narrow exceptions.

Beyond biometrics, other CX-relevant areas likely to warrant scrutiny include AI used in credit, insurance or eligibility journeys, automated routing and prioritisation logic that determines which customers receive which quality of service, and personalisation engines built on detailed behavioural profiling.
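
To see how these classifications might translate into an internal triage tool, the sketch below maps a handful of CX use cases to the Act's risk tiers. It is illustrative only: the function, the use-case labels and the tier names are our own shorthand for the classifications discussed above, and nothing here is legal advice.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk (transparency duties)"
    MINIMAL_RISK = "minimal or no risk"


def triage_cx_use_case(use_case: str, subject: str = "customer") -> RiskTier:
    """First-pass triage of a CX AI use case against the Act's tiers.

    Illustrative shorthand only, not legal advice.
    """
    if use_case == "emotion_recognition":
        # Banned in the workplace; high-risk when aimed at customers.
        return RiskTier.PROHIBITED if subject == "employee" else RiskTier.HIGH_RISK
    if use_case == "biometric_categorisation_special_categories":
        # Inferring race, political opinions, religion etc. is prohibited.
        return RiskTier.PROHIBITED
    if use_case in ("post_interaction_biometric_id", "credit_or_eligibility_ai"):
        return RiskTier.HIGH_RISK
    if use_case == "customer_facing_chatbot":
        # Chatbots must be disclosed as AI (transparency obligations).
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK


print(triage_cx_use_case("emotion_recognition", subject="employee"))
# RiskTier.PROHIBITED
```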

Three Big Changes CX Leaders Should Prepare For

Transparency is among the most immediately tangible shifts. The Act introduces disclosure requirements for AI systems interacting directly with people, meaning organisations that have deployed chatbots or virtual agents without clearly identifying them as AI may need to revisit how those tools are presented.
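
In practice, the disclosure itself can be lightweight. The sketch below shows one way a team might wire an up-front AI notice into a chat session; the function name, the `send_message` hook and the wording of the notice are all illustrative assumptions rather than anything prescribed by the Act.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)


def open_chat_session(send_message) -> None:
    """Open a chat session with the AI disclosure sent up front.

    `send_message` stands in for whatever the chat platform uses to
    push a message to the customer.
    """
    send_message(AI_DISCLOSURE)  # disclose before the first AI reply
    # ... hand off to the assistant for the rest of the conversation


# Demonstration using the console as the 'channel':
open_chat_session(print)
```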

Documentation and governance will also move up the agenda. According to the IAPP analysis, providers of high-risk systems must implement extensive safeguards including risk management procedures, data governance controls, post-market monitoring, logging mechanisms and human oversight. For many organisations where AI tools have been adopted team by team and without centralised oversight, this requirement may expose significant governance gaps — a pattern the FPF describes as a broader challenge in how the Act's enforcement architecture is structured.
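
For teams wondering what "logging mechanisms" might look like on the ground, here is a minimal sketch of an append-only audit trail for AI-assisted decisions. The schema and field names are hypothetical suggestions, not requirements lifted from the Act, and the sketch assumes Python 3.10 or later.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AIDecisionRecord:
    """One audit-log entry per AI-assisted decision.

    The field names are suggestions; the aim is simply a durable,
    reviewable trail.
    """
    system_id: str              # which AI system produced the output
    model_version: str          # supports post-market monitoring
    input_summary: str          # what the model saw (redact personal data)
    output: str                 # what it decided or recommended
    human_reviewer: str | None  # who reviewed it, if anyone
    timestamp: float


def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    """Append the record as one JSON line, building an auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(AIDecisionRecord(
    system_id="routing-engine",
    model_version="2026-01",
    input_summary="churn_score=0.83, tier=gold",
    output="route_to_retention_team",
    human_reviewer=None,
    timestamp=time.time(),
))
```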

Human oversight is the third area of consequence. Where AI contributes to high-impact decisions, the Act is clear that meaningful human review must remain part of the process. Fully automated decisions that significantly affect customers, with no avenue for human intervention or appeal, are likely to face the greatest scrutiny. Designing workflows that keep humans meaningfully in the loop is no longer just a best practice — under the Act, it is increasingly a compliance requirement.
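
A human-in-the-loop check can be expressed very simply. The sketch below routes any high-impact or low-confidence decision to a human reviewer before it reaches the customer; the confidence threshold and the definition of "high impact" are policy choices for each organisation, not values set by the Act.

```python
from typing import Callable


def decide_with_oversight(
    ai_decision: str,
    confidence: float,
    high_impact: bool,
    human_review: Callable[[str], str],
    confidence_floor: float = 0.9,
) -> str:
    """Route high-impact or low-confidence AI decisions to a human.

    The reviewer can confirm, amend or overturn the AI's output.
    """
    if high_impact or confidence < confidence_floor:
        return human_review(ai_decision)
    return ai_decision


# In a real system `human_review` would open a case in an agent's
# queue; here it simply marks the decision as reviewed.
result = decide_with_oversight(
    "deny_refund_request", confidence=0.72, high_impact=True,
    human_review=lambda decision: f"{decision} (confirmed by agent)",
)
print(result)  # deny_refund_request (confirmed by agent)
```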

The stakes of non-compliance are substantial. As the FPF analysis notes, organisations found to be engaging in prohibited AI practices can face fines of up to 35 million euros or seven per cent of total worldwide annual turnover, whichever is higher.

The Opportunity Hidden Inside Regulation

There is a tendency to interpret new regulation as friction. In this case, it also brings a business opportunity. Trust is increasingly a competitive differentiator in customer experience, and organisations that can demonstrate their AI is fair, transparent and accountable will carry an advantage with customers and enterprise buyers alike.

Better governance also tends to improve model quality. Clearer documentation, more rigorous monitoring and stronger cross-functional oversight are not just compliance outputs. They are conditions that make AI perform better and fail more gracefully. Organisations that have already built the foundations of responsible AI governance in CX will find compliance a shorter journey than those starting from scratch. The winners in AI-powered CX may not be the fastest adopters, but the most trusted ones.

Five Actions CX Leaders Can Take Now

  1. Audit where AI currently touches the customer journey, including tools embedded inside licensed platforms you may not have specifically procured (a simple inventory sketch follows this list).

  2. Map your vendor and supply chain dependencies, and establish what AI is running inside third-party products.

  3. Review the transparency and consent language customers encounter at AI-driven touchpoints, particularly where no disclosure currently exists.

  4. Establish cross-functional AI governance that connects CX with legal, IT and compliance before August 2026 obligations take full effect.

  5. Build a responsible AI roadmap that ties governance milestones to CX outcomes, not solely to legal deadlines. The FPF's implementation timeline provides a practical framework for sequencing that work.
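
As flagged in the first action, here is one lightweight way to start that audit: a simple inventory structure that also feeds the vendor mapping in action two and the disclosure review in action three. The fields and the sample entries, including the vendor name "VendorX", are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AITouchpoint:
    """One row in a CX AI inventory. Fields are suggestions, not a
    schema prescribed by the Act."""
    name: str
    vendor: str           # feeds the supply-chain mapping in action 2
    journey_stage: str    # where it touches the customer
    discloses_ai: bool    # feeds the transparency review in action 3
    risk_tier: str        # from an initial triage, e.g. "high-risk"


inventory = [
    AITouchpoint("support chatbot", "in-house", "first contact", True, "limited-risk"),
    AITouchpoint("call sentiment scoring", "VendorX", "support calls", False, "high-risk"),
]

# Flag AI-driven touchpoints with no customer-facing disclosure today.
print([t.name for t in inventory if not t.discloses_ai])
# ['call sentiment scoring']
```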

What Comes Next?

The full weight of the EU AI Act lands for most high-risk AI systems in August 2026, with a further tranche of obligations following in August 2027. For customer experience leaders, that window is shorter than it may appear. Building a CX AI roadmap that accounts for regulatory milestones alongside commercial goals is fast becoming a strategic necessity, not an optional exercise.

The businesses that prepare now will not simply be compliant. They will be better placed to earn the trust of customers who are paying increasingly close attention to how AI is being used in the services they rely on.

 
