ChatGPT – What could possibly go wrong?

A guide to the strategic questions organizations should consider before adoption

ChatGPT has taken the world by storm, showcasing what can be achieved with AI today if you have access to enough resources.

And while it’s undeniably fascinating (and even exciting) to see the seemingly endless examples online of how the technology can carry out common work-related tasks, it’s perhaps time to sober up from the hype and look a little deeper beneath the surface, because there are some important questions for businesses to consider before jumping on the ChatGPT bandwagon.

Revolutionary technology or big risk?

Imagine having an assistant that has read and remembered everything there is to read on the internet and thus can answer any question you can think of. For example, it can tell you about events from history, it can write programs (or fix your Excel formulas), and it can write just about anything you ask it to. Sounds amazing, right?

But what if that same assistant will do anything to give you an impressive-sounding answer, no matter what? Even when there is no clear answer, or when it doesn’t know what it’s talking about? Imagine the assistant making up a good-sounding answer rather than saying it doesn’t know. Imagine the assistant having all that knowledge but not actually understanding any of it, nor having any sense of ethics or morals. Would you hire that assistant?


Somewhere along those lines is where you find the incredibly hyped service ChatGPT today, which represents one of the most significant shifts in technology in years. It has put AI on everyone’s mind, and not a day goes by without ever more impressive examples on social media of how the technology can be used.


ChatGPT is undeniably a significant leap forward, and it’s a fantastic example of what can be done with machine learning today if you have enough resources. But there are also some issues and limitations around how it can be used safely.


In this regard, a lot has already been said about the technology’s potential to be used by ‘bad actors’ to cause harm (even OpenAI themselves published a report on the emerging threats and potential mitigations). It’s great to see such a strong dialogue around this already, and we look forward to the technological and regulatory safeguards that will come out of it.

“We believe that it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale.”

But frankly, in this sense, ChatGPT is no different from other revolutionary technologies we have invented. Nuclear power, space exploration, nanotechnology, gene editing, and even the internet (among many others) all have the potential to be misused. This doesn’t mean we should discard them. It just means we need to be careful and put the necessary safeguards in place to prevent and dissuade people from misusing them. We expect ChatGPT to follow the same path, meaning that the technology and its capabilities will inevitably become part of our daily lives sooner rather than later.


There are, however, additional risks to consider that go beyond the potential misuse of the technology, especially for organizations that are considering adopting it in their operations. These issues are not discussed to the same extent, and in posts on social media and in articles, a lot of misunderstandings are floating around. So let’s try to shine some light on a few of those aspects.

Keeping it real

One of the most important things to understand about ChatGPT and similar solutions is what they are trained to do, which says a lot about their nature.


Under the hood, ChatGPT is an extension of GPT-3, another of OpenAI’s large language models that produce human-like text, and it was trained to predict what the next token (roughly speaking, a piece of a word or sentence) should be. It tries to answer the question: “What is the most likely next word in this sequence?” The model is rewarded if it generates a probable, real-looking answer, and asked to try again if it doesn’t. (How this process works in detail is, unfortunately, beyond the scope of this article.) And this is where the key to understanding ChatGPT’s limitations lies:

ChatGPT generates the most real-looking answer, not the most correct one.

In many cases, of course, the most likely or most real-looking answer is also a correct one, but not always. The model is not even (in its purest form) trying to be correct; it’s just trying to make the exchange seem like a real conversation. This is not an error in the model, or something that more data will fix. It is in the nature of how the model is trained: it is a language model that has learned to generate human-like text, not an encyclopedia that will give you, or even try to give you, the most correct answer. Being partly trained by humans who manually ranked responses has the indirect effect of prioritizing correct answers, as those are probably ranked higher by the people giving feedback, but correctness is not part of what the model is being trained to achieve.
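To make the “most likely next word” objective concrete, here is a deliberately naive sketch: a bigram “model” that counts which word tends to follow which in a tiny corpus and then always picks the most frequent continuation. This is an illustration only, not how ChatGPT actually works (real models use neural networks trained on billions of tokens), but the objective has the same shape: generate the most probable-looking continuation, with no notion of truth.

```python
from collections import Counter, defaultdict

# A tiny toy corpus. The "model" only learns which word tends to follow which.
corpus = (
    "the sky is blue . the sky is clear . "
    "the grass is green . the answer is plausible ."
).split()

# Build bigram counts: for each word, count the words that follow it.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation, not the true one."""
    return bigrams[word].most_common(1)[0][0]

def generate(start, length=4):
    """Greedily chain the most probable next word, starting from `start`."""
    out = [start]
    for _ in range(length):
        out.append(most_likely_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Note that the output is determined purely by frequency in the training data: if the corpus contained mostly wrong statements, the model would fluently reproduce them.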

No rose without a thorn

Knowing what ChatGPT is trying to do at its core allows us to reflect on how to use it. For example, you should not ask ChatGPT for health advice, at least not without double-checking the answer against an official source. But you don’t have to resort to something as sensitive as medicine to find questionable or precarious use cases. Who is responsible if a press release ChatGPT generates for your company contains false claims? If you use ChatGPT to generate code for your operations, will your developers still be able to maintain it when, to a larger and larger extent, it is auto-generated?


Another aspect to consider is the ethical question, or rather the lack of one. Language models mimic human-written text, with all its inherent biases and preconceived ideas. In the end, the model is only trying to generate text and has no sense of right and wrong. While this issue is in no way unique to these kinds of models, it is important to remember that it is there: the model will generate biased or even racist claims for certain prompts. Filters have been added that mitigate this problem to a large extent, but it remains at the core of the model, the same prejudices and biases that exist everywhere in written text. And without an understanding of right and wrong, the model does not know that it is doing something wrong when, for example, it writes an algorithm that discriminates based on race or gender.


There are more things to consider. For customer-facing applications, you should first assess the risk, and the potential impact on your brand, if the model returns biased or discriminatory replies to your customers. The damage would be to your brand, while you have little or no control over what the model outputs. You might also need to consider the risk and impact of the model returning inaccurate answers that could harm your customers.


Something else to keep in mind is that interacting with ChatGPT means sending your data directly to OpenAI:

  • Are you building critical infrastructure with your code? Then maybe you don’t want your developers to send that code to an external vendor and ask it to be improved.
  • Are you handling private information about your customers? If so, how do you make sure that information is not sent to an external vendor by mistake?
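One pragmatic mitigation is to gate outbound prompts before they leave your network. The sketch below is a hypothetical, deliberately simple filter of our own invention: the pattern names and rules are illustrative only, and a real deployment would rely on a proper data-loss-prevention tool rather than a pair of regular expressions.

```python
import re

# Hypothetical patterns for data you never want to send to an external vendor.
# Illustrative only; real deployments need far more robust detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_sensitive(text):
    """Return the kinds of sensitive data detected in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def safe_to_send(prompt):
    """Gate an outbound prompt before it reaches an external API."""
    return not find_sensitive(prompt)

print(safe_to_send("Summarize this meeting for me."))           # True
print(safe_to_send("Email jane.doe@example.com the invoice."))  # False
```

Even a crude gate like this makes the risk visible: every prompt is a data transfer to a third party, and should be treated as such.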


This is, of course, nothing new. What is new is the ease with which users can ask ChatGPT questions that inadvertently contain or reveal sensitive information, or create operational risks for an organization through over-reliance on auto-generated content. It is important to remind yourself of what is going on behind the scenes when using ChatGPT, and to do the same risk assessments you would when sending your data to any other third party.


One last open question is who owns the content created by ChatGPT. If the model created parts, or most, of the code of one of your applications, is that code base yours (keep in mind that it wasn’t your expertise that went into it)? What would the implications be if OpenAI decided at some point that you need to license the code instead? Could you still use everything ChatGPT produced even after you stop paying for the subscription? Most of these questions will be addressed in commercial agreements, but we don’t yet know what those will look like in the future, leaving organizations open to the whims of one company.

Here to help

ChatGPT is an amazing technology, and used correctly it will have a huge impact on many different business cases. It can improve the efficiency of your developers, and it can improve the output of your marketing department. If you keep in mind that it is a service trying to make replies look realistic, not necessarily correct or ethical, you have a good rule of thumb for assessing whether ChatGPT is right for you.


That said, if you are thinking about how your organization can use ChatGPT or other generative models, we strongly advise that you undertake a risk assessment of both the application and the model provider before deciding. Our team of advisors ensures that you have the most accurate information to make this decision by researching and qualifying your use case and model provider, and making recommendations based on your needs.


Get in touch to chat about how we can help you, or reach out to the authors Emanuel Johansson or Reynaldo Boulogne.