Artificial Errors: Why AI Gets It Wrong?

February 2023 | Digital Transformation

It's OK for AI to get it wrong occasionally. As Artificial Intelligence enters the mainstream, we must accept that the computer isn't always right.

It's been a long time coming. AI has been the future of the technology industry for years. Every major software announcement for the past decade has seemingly touted new AI or machine learning features. Yet on release, those new AI automations have been ignored by the majority of users.

That's because AI has rarely been the main feature of any customer-facing application. Everyone knows about the mysterious algorithms that underpin social networks and search engines. These are the most prominent examples of AI affecting people's experience of the world wide web. They're also wildly unpopular among both users and regulators, contributing to a general distrust about the benefits of AI in web applications.

Measuring Confidence

Only in the sphere of data management has widespread reliance on AI and machine learning been normalised. Enterprise-scale data normalisation and data cleansing services are expected to incorporate machine learning in order to increase the number of records their models can process.

There are several key reasons for this. Perhaps the most important is that data scientists inherently grasp a concept the general public has only just noticed: the computer isn't always right. AI models of all sizes are often wrong, and that isn't a bug. It's a feature.

In the realm of data analysis, any form of fuzzy matching or machine learning model comes with a confidence score indicating the estimated accuracy of the output. The data analysts using the model will then decide the threshold below which potential matches are rejected as inaccurate. The AI is expected to be wrong, and there are processes in place to deal with this.
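The process described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: it uses Python's standard-library SequenceMatcher as a stand-in confidence score, and the record names and the 0.75 threshold are invented for the example. The point is the shape of the workflow: score every candidate, then let the analyst's threshold decide what counts as a match.

```python
from difflib import SequenceMatcher

def match_confidence(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity score between two record strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def dedupe_candidates(record: str, candidates: list[str],
                      threshold: float = 0.75) -> list[tuple[str, float]]:
    """Keep only candidate matches whose confidence clears the threshold.

    Anything below the threshold is rejected as a likely false match.
    The threshold itself is a judgment call made by the analyst, and
    some genuine matches will inevitably fall below it.
    """
    scored = [(c, match_confidence(record, c)) for c in candidates]
    return [(c, round(s, 2)) for c, s in scored if s >= threshold]

matches = dedupe_candidates(
    "Acme Corp Ltd",
    ["ACME Corporation Ltd", "Acme Corp Ltd.", "Apex Media"],
)
# "Apex Media" scores far below the threshold and is rejected;
# the two plausible variants are kept with their confidence scores.
print(matches)
```

Raising the threshold trades recall for precision: fewer false matches, but more genuine duplicates slip through unmerged. That trade-off is exactly the acceptance of error the article describes.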

Captcha form validation works on the same principle. reCAPTCHA v3, for example, returns a score indicating how likely a particular form submission is to come from a human, where 1.0 means almost certainly legitimate and 0.0 means almost certainly a bot. The typical recommendation is to reject anything with a score below 0.5. This ensures legitimate form submissions are accepted but does allow some spam through too. That's why no form spam solution is perfect: it's very difficult to tell the difference between a high-quality spam submission and a poor-quality legitimate one.
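A server-side handler for such a score might look like the sketch below. This assumes a reCAPTCHA-v3-style score (1.0 = very likely human, 0.0 = very likely automated); the thresholds and the "review" queue are illustrative choices, not part of any official API. Routing borderline scores to a human reviewer is one way to build the expectation of error into the process, rather than forcing a hard yes/no.

```python
ACCEPT_THRESHOLD = 0.5  # a common starting point; tune per site

def classify_submission(score: float) -> str:
    """Map a 0.0-1.0 humanness score to an action.

    Rather than a binary accept/reject, borderline scores are queued
    for manual review: the model is expected to be wrong sometimes,
    so the surrounding process has to absorb those errors.
    """
    if score >= 0.7:
        return "accept"
    if score >= ACCEPT_THRESHOLD:
        return "review"  # uncertain: hand off to a human
    return "reject"

print(classify_submission(0.9))   # accept
print(classify_submission(0.55))  # review
print(classify_submission(0.1))   # reject
```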

Beyond the Binary

ChatGPT is far more advanced than the machine learning models used for reCAPTCHA or data normalisation. However, the same fundamental rule applies. Artificial intelligence models are designed to move computing beyond the binary into subjective problems with no right or wrong answer. Whether the technology is used for writing essays or cleaning databases, AI will sometimes get facts wrong. In that respect, it is no different from human intelligence.

Much of the commentary around ChatGPT has focused on its ability to replace search engines. It even panicked Google into prematurely launching its competing Bard service last week. However, ChatGPT is pitched as a chatbot rather than a search engine. It is not designed to provide answers to factual questions. It can do that, but as with human intelligence, the answers are occasionally inaccurate.

Instead, the revolutionary aspect of ChatGPT is its ability to write prose that sounds natural to a human. It's an excellent tool for producing a blog outline or a potential sales pitch. It can even write code in a pinch. However, anything it produces still needs to be fact checked and edited, just like the copy produced by a human copywriter.

Microsoft acknowledged this when launching its new Bing with ChatGPT service last week. The AI-powered version of Bing is still in its pilot stage but displays ChatGPT responses and conventional search results side by side so that users can fact check the answers produced by the AI. Microsoft clearly sees ChatGPT as an extension of the traditional Bing search rather than a replacement for it.

Like any intelligence, AI can learn from its mistakes. No doubt, the accuracy of ChatGPT can be improved over time. That is one benefit of AI. It can be highly specialised to the requirements of one specific task in a way that no human ever can. How Bing and ChatGPT evolve will depend on the needs of their users. Other AI services will be launched to fill the gaps left behind.

In order to win the trust of sceptical users, vendors will need to be clear on the benefits and limitations of their AI models. We're still very early in the AI hype cycle, and people are still working out the best use cases for the technology. However, as the adoption of AI accelerates, a balance will need to be struck between the ambitions of technology firms and the needs of consumers and businesses. Not all AI products will succeed. Plenty will, though, and it is essential that those that do gain traction are adapted to the needs of society as a whole.

Banner Photo by Rock'n Roll Monkey / Unsplash

Written by
Marketing Operations Consultant and Solutions Architect at CRMT Digital specialising in marketing technology architecture. Advisor on marketing effectiveness and martech optimisation.