Artificial Intelligence
March 8, 2023
November 18, 2024

ChatGPT, Algorithms, and Ethical AI

AI tools like ChatGPT may seem reliable, but the Cupertino effect shows how algorithmic bias can mislead us, especially when it comes to regulatory compliance.

The arrival of ChatGPT caused a flurry of hand-wringing, which lasted all of a fortnight before we all settled into using it. Much has been written about the bias that might be inherent in the tool, but not much has been said about the inevitable Cupertino effect it’s already having.

What is the Cupertino effect?

The Cupertino effect is named for an early autocorrect issue, which changed ‘cooperation’ (without the hyphen) to ‘Cupertino’ (the Californian city that serves as a metonym for Apple Inc, the home of its corporate HQ). This resulted in many embarrassing misprints in official documents, such as "South Asian Association for Regional Cupertino". Why did the algorithm do this, instead of just correcting to ‘co-operation’, if it needed to correct anything at all? We don’t know, and it doesn’t matter – the insidious effect was that people were being told that ‘cooperation’ was not a word. It is a word. But when 'computer says no', you are going to doubt yourself, and just use ‘co-operation’ instead.

A real risk of ChatGPT and other generative AI is that we take its outputs on faith. If it tells us something is wrong, we are going to believe it. This is why it’s so essential not to use algorithmic AI for regulatory compliance. When we are deciding which law to apply, we need to be able to understand and challenge the evidence behind that decision.

What does AI bias look like for regulatory applications?

In 2020, a Federal government agency implemented an AI EDRMS that combined a rules tree with a machine learning model. If a record could not be categorised by a rule, the machine learning model classified it based on its contents. The statistical model was developed by taking a set of records that first had to be manually classified, applying Natural Language Processing techniques to normalise the document content into vectors, and then training the model on those vectors. This process required significant input from the records managers: classifying the training dataset, designing (and then maintaining) the rules engine, and training the AI.
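
The general pattern can be illustrated with a short sketch: try the hand-maintained rules first, and fall back to a statistical model trained on the manually classified records. This is an illustration only – the rule patterns, record classes, and the use of scikit-learn’s TfidfVectorizer and LogisticRegression are assumptions made for the sketch, not details of the agency’s system.

    # Sketch of a rules-first classifier with an ML fallback (illustrative only).
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hand-maintained rules: pattern -> records class (hypothetical examples)
    RULES = [
        (re.compile(r"\binvoice\b|\bpurchase order\b", re.I), "FINANCE - Procurement"),
        (re.compile(r"\bposition description\b|\brecruitment\b", re.I), "PERSONNEL - Recruitment"),
    ]

    # ML fallback: NLP normalisation of document text into TF-IDF vectors,
    # feeding a linear classifier trained on manually classified records.
    ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

    def train_fallback(texts, labels):
        """Fit the fallback model on the manually classified training set."""
        ml_model.fit(texts, labels)

    def classify(text):
        """Try the rules tree first; fall back to the statistical model."""
        for pattern, records_class in RULES:
            if pattern.search(text):
                return records_class
        return ml_model.predict([text])[0]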

Generally, Supervised Machine Learning requires a minimum of 1,000 good examples for each rule. So, to train the model, the records team needs to find and curate at least 1,000 examples for each function and activity pair in the records authority (of which there are usually hundreds). They then need to ‘supervise’ the learning – confirming yes or no to each suggested match. The process is very time consuming for records teams, especially when the organisation has hundreds of separate rules, and it also requires significant and ongoing effort from the vendor to build and refine. The implementation reported 80% accuracy, below the 90% reliability generally expected of AI classification. It was unable to classify based on context as well as content, and it delivered only a 5% productivity gain to the organisation.
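
To make the scale of that curation effort concrete, here is a back-of-envelope calculation. The 1,000-examples-per-rule figure is the one above; the rule count of 300 and the 30 seconds per human confirmation are illustrative assumptions only.

    # Rough estimate of the curation workload (assumed figures marked below).
    EXAMPLES_PER_RULE = 1_000     # typical minimum for supervised learning (per the text)
    RULE_COUNT = 300              # "usually hundreds" of function/activity pairs (assumed)
    SECONDS_PER_DECISION = 30     # assumed time to find and confirm one example

    total_examples = EXAMPLES_PER_RULE * RULE_COUNT
    hours = total_examples * SECONDS_PER_DECISION / 3600
    print(f"{total_examples:,} curated examples is roughly {hours:,.0f} staff hours")
    # -> 300,000 curated examples is roughly 2,500 staff hours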

Most problematically, the outcomes of this supervised, algorithmic approach were not traceable. The organisation had to trust that the decision reached by the AI was correct, based on the training data they had provided and then curated. How do we know the training data was good? How do we know the human decisions that were baked into the system at the start were accurate? And how will we know that the decisions stay broadly accurate, even after the machine starts 'learning' on its own? With an algorithmic model, we risk encoding bias or legacy assumptions, and projecting them as irrefutable truth into the future. The machine tells us the 'answer', and it's easier just to believe it's right, and we are wrong, because the AI told us so.

How can we avoid biased Artificial Intelligence?

This is why we use Rules as Code AI for automating governance and compliance in enterprises. It is transparent and traceable, which are vital principles for ethical AI. We can’t let AI systems determine right and wrong inside a black box, in case they are wrong (and we blindly believe them).

We designed our AI based on the OECD ethical AI Principles of Transparency and Explainability. These principles require that AI systems provide 'plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision'. These rules exist to protect stakeholders: the people who are (sometimes adversely) affected by the outcome of an AI process.

Castlepoint's Rules as Code is completely transparent. It shows exactly why a record was classified, and sentenced, in the way that it was. It provides all the information required about the record, its contents, and the actions on it over time to make an informed, evidence-based decision about its handling. And it does this automatically, without requiring the records teams to write rules, make file plans, or supervise and train the engine.
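
As a rough, hypothetical sketch of what a traceable rules-based decision can look like (not Castlepoint’s actual implementation), the idea is that every classification carries with it the rule that fired and the evidence it matched on, so a reviewer can check – and challenge – the outcome:

    # Sketch of a rules-as-code decision that carries its own evidence
    # (rule structure and field names are illustrative assumptions).
    from dataclasses import dataclass

    @dataclass
    class Rule:
        rule_id: str            # reference into the records authority
        trigger_terms: tuple    # terms that indicate this class of record
        records_class: str
        retention: str          # sentencing outcome if the rule matches

    RULES = [
        Rule("AUTH-14.2", ("tender evaluation", "procurement plan"),
             "FINANCE - Procurement", "Destroy 7 years after action completed"),
    ]

    def classify_with_evidence(text):
        """Return the decision plus the exact rule and terms that produced it."""
        lowered = text.lower()
        for rule in RULES:
            hits = [t for t in rule.trigger_terms if t in lowered]
            if hits:
                return {
                    "records_class": rule.records_class,
                    "retention": rule.retention,
                    "rule_id": rule.rule_id,
                    "matched_terms": hits,   # the evidence a reviewer can inspect
                }
        return {"records_class": "UNCLASSIFIED", "retention": None,
                "rule_id": None, "matched_terms": []}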

Contact us to find out more about how this capability works for your industry or role, and to see the transparency in action.