Re:Adapt Data Science Insights

Part 1: Customer-Facing AI - Why 95% Fail Before They Start

17 October 2025
Forward to a friend and subscribe to our regular newsletter with our latest insights.

A recent and widely circulated report [1] from the team at NANDA [2] has highlighted the enormous challenge of successfully productionising AI-driven applications, claiming their research revealed an eye-watering failure rate of 95%. Such failures are not typically driven by a lack of technical expertise in deploying any particular system. Rather, they stem from organisations automating work they do not fully understand, implementing solutions before clarifying what problem they are actually solving, or rushing to deploy AI without examining why customers contact them in the first place.

Whilst some questions may exist as to the authors' impartiality (toward the end of the article they promote their own market offering as a potential solution to the troubles they identify), in our opinion they do raise a key point that is often forgotten in the rush for AI dividends: understanding your customers. This article explores why customer-facing AI systems so often fail and what organisations must do differently to succeed.

Are You Solving Problems or Creating Them?

Before jumping to an AI solution, we need to think about customers first and foremost. Is your AI application solving an acute problem for them? More importantly, are you creating the problem your AI is trying to solve?

Many customer contacts exist because of confusing policies, unclear documentation, or fragmented processes. Organisations often automate responses to issues they should not be creating in the first place. A McKinsey study [3] found that 65% of "routine" orders required manual intervention - these were not customer problems; they were system design problems.

Productionising AI in these circumstances will not generate traction and engagement. In fact, it may generate further problems [4] as customers try to work around the new system or, even worse, lead to reputational damage and embarrassment. There are famous examples of rogue chatbots [5, 6, 7] that have damaged brand reputation and customer trust.

The Reality of AI Customer Service

Rushing to use AI as a wholesale replacement for human customer service agents is clearly fraught with risk, and some companies have reversed course on seeing how AI performs outside of development environments - Commonwealth Bank, for example, rehired an entire department [8]. Customers are clearly not enthusiastic about many of the AI systems they must interact with day to day [9, 10].

The Commonwealth Bank example illustrates this perfectly - the bank automated without understanding what the work actually involved and was forced into an embarrassing about-turn. The message is clear: deploy customer-facing AI without proper understanding, and you risk not just project failure but genuine damage to customer relationships.

Understanding What Customers Really Want

So how can you know what customers want? You may need to ask yourselves some challenging questions:

  1. Does my Net Promoter Score (NPS) tell the full story?
  2. How do customers interact with our organisation?
  3. What are they really asking for?

Dig deeper into why customers contact you. Are they asking for what your organisation exists to provide, or are they calling because something went wrong - a confusing letter, contradictory information, or a process that does not work?

One study found that analysing customer contact reasons revealed most inquiries resulted from the organisation's own policies and systems, not genuine service requests. Fix these root causes rather than automating responses to problems you are creating.

Using AI to Understand Customers (Not Replace Humans)

The good news for many companies is that they have collected a trove of valuable data in the form of emails and call logs. There is nothing like reading conversations between customers and agents to understand demand. However, the sheer volume of data can be overwhelming.

Fortunately, this is one application where careful implementation of AI, tailored to the unique nature of your organisation's data and function, can enable analysis of customer interactions at scale. This provides a baseline view of how customers interact with your organisation today and highlights what can be improved.

Repeating and refining the process over time can reveal whether a given improvement really did fix the issue customers were reporting. This does not require an online bot to field queries as they come in; it involves real humans using modern tools to truly understand their organisation.
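As a minimal sketch of this kind of contact-reason analysis: the snippet below tags each contact transcript with a reason and measures what share of contacts stem from the organisation's own systems ("failure demand") rather than genuine service requests. The categories and keyword lists are purely illustrative assumptions; a production system would typically use an LLM or a trained classifier rather than keyword matching.

```python
from collections import Counter

# Hypothetical contact-reason categories and trigger phrases (illustrative
# only); in practice these would come from an LLM or trained classifier.
FAILURE_DEMAND_KEYWORDS = {
    "confusing_letter": ["letter", "confusing", "what does this mean"],
    "broken_process": ["tried online", "error", "didn't work", "website"],
    "chasing_update": ["still waiting", "any update", "heard nothing"],
}

def classify_contact(text: str) -> str:
    """Tag a transcript with a contact reason; 'value_demand' means a
    genuine service request rather than a problem we created."""
    lowered = text.lower()
    for reason, keywords in FAILURE_DEMAND_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return reason
    return "value_demand"

def failure_demand_share(transcripts: list[str]) -> float:
    """Proportion of contacts caused by our own systems and policies."""
    counts = Counter(classify_contact(t) for t in transcripts)
    failure = sum(v for k, v in counts.items() if k != "value_demand")
    return failure / len(transcripts) if transcripts else 0.0
```

Running `failure_demand_share` periodically against fresh call logs gives exactly the repeat-and-refine baseline described above: if the share falls after a fix is shipped, the fix addressed a real root cause.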

This analysis may lead to insights such as the need for better (or more accessible) documentation, highlight conflicting company policies, or expose deficiencies in existing apps or processes. When such issues have been addressed, the impact on service levels can be measured and a virtuous circle of improvement implemented.

Part of this process may show that an appropriately designed AI system has a clear use case to improve service, such as by reducing wait times for simple queries. Such validated and evidence-based applications are far more likely to succeed in production than a drive to automate purely to reduce a cost line on a profit and loss statement.

The Critical Question

Before implementing any AI solution, organisations must understand their current processes and where they create unnecessary contacts. Start by asking: "What proportion of customer contacts would not exist if our systems, policies, and processes worked properly?"

This simple question can be revelatory. If the answer is "most of them," then no amount of sophisticated AI will create a successful customer-facing system. You will simply be automating dysfunction.

A Framework for Customer-Facing AI Success

The organisations succeeding with customer-facing AI share a common pattern:

  1. Understand your purpose from the customer's perspective. What are customers actually trying to accomplish when they contact you? What outcomes do they seek?
  2. Identify where your own systems create problems. Map the customer journey end-to-end and identify every point where confusion, delay, or rework occurs because of your policies, processes, or systems.
  3. Fix root causes first. Many problems are better solved by clarifying documentation, aligning policies, or redesigning processes. AI is a tool, not a strategy.
  4. Design for human-AI collaboration. When AI is appropriate, design systems where technology augments human judgment rather than replacing it entirely. Customers should always have a clear path to human support when needed.
  5. Measure outcomes, not activities. Success means problems solved and customer needs met, not just faster response times or reduced call volumes.
  6. Test rigorously before scaling. Use small-scale pilots with clear outcome metrics. Compare: Does this solve the customer's actual problem? Does it reduce the issues that cause customers to contact us repeatedly? Does it handle the complexity and exceptions that exist in real interactions?

Always measure the AI output against current practice in your organisation and test across a range of conceivable inputs (not just those encountered naturally during development). When you feel ready to scale, employ strategies such as canary releases or rolling deployment with A/B testing to monitor uptake and performance.
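A canary release of the kind mentioned above can be as simple as deterministic hash-based bucketing, so that a fixed, gradually increasing share of users is routed to the new AI system while the rest form the control group. This is a generic sketch; the function name and salt are our own illustrative choices, not any particular vendor's API.

```python
import hashlib

def in_canary(user_id: str, rollout_pct: float,
              salt: str = "ai-assistant-v1") -> bool:
    """Deterministically route a fixed share of users to the new system.

    The same user always lands in the same bucket, which keeps the A/B
    comparison clean as rollout_pct is gradually increased.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < rollout_pct
```

In use, one might start with `rollout_pct=0.05`, compare outcome metrics (problems resolved, repeat contacts) between the canary and control groups, and only increase the percentage once the new system demonstrably performs at least as well as current practice.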


Conclusion

Successful customer-facing AI implementation begins not with technology selection but with understanding your customers and your organisation as a system. The vast majority of AI failures stem from automating before understanding - creating sophisticated solutions to the wrong problems or, worse, automating responses to problems the organisation itself creates.

Start by understanding what proportion of customer contacts result from your own system issues. Fix those root causes. Then, and only then, determine whether AI offers a genuine solution to remaining customer needs. The organisations succeeding with customer-facing AI are not those with the most sophisticated models, but those that understand their customers best and use technology purposefully to improve outcomes.

In Part 2 of this series, we will explore how these same principles apply to AI tools designed for internal business users, and why understanding how work actually happens is just as critical as understanding customer needs.


References

1. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
2. https://nanda.media.mit.edu/
3. https://www.mckinsey.com/capabilities/operations/our-insights/better-together-process-and-task-mining-a-powerful-ai-combo
4. https://www.paddleyourownkanoo.com/2025/04/07/the-british-airways-customer-service-chatbot-is-so-bad-it-doesnt-even-know-where-the-airline-is-based/
5. https://www.bbc.co.uk/news/technology-68025677
6. https://www.cxtoday.com/ai/3-times-customer-chatbots-went-rogue-and-the-lessons-we-need-to-learn/
7. https://x.com/ChrisJBakke/status/1736533308849443121
8. https://www.techradar.com/pro/now-thats-an-embarassing-u-turn-bank-forced-to-rehire-human-workers-after-their-ai-replacement-fail-to-perform
9. https://www.techbusinessnews.com.au/blog/why-customers-dont-like-or-hate-chatbots-bad-for-business/
10. https://www.techbusinessnews.com.au/news/consumers-are-fed-up-with-ai-chatbot-and-automated-email-responses/


- Thomas Masters is a Director and Chief Data Scientist of Re:Adapt Data Science, helping business leaders create value with their data.


- Jason Frank is the Managing Director of Re:Adapt Data Science, with a passion for rethinking how we manage and leverage data to make better decisions.