By now it's nearly impossible not to have heard of ChatGPT, especially if you're involved in the tech world in any capacity. The AI has more than 100 million users, and OpenAI's website now garners more than a billion visits per month on average.
The possibilities behind such a groundbreaking tool have been widely discussed, and countless news articles have featured samples of what ChatGPT and similar AI tools are capable of.
Among the tsunami of headlines, however, are ripples of concern over the negative side effects of such a tool as it grows in popularity and becomes more advanced. As security experts, we've already clocked threat actors using the tool with less-than-pure intentions, and the possibilities for misuse expand every day.
Let’s break down some of the most prominent ChatGPT security concerns, as well as some adjacent implications such advanced artificial intelligence has on the tech world and what you can look out for.
What is ChatGPT Exactly?
At its core, ChatGPT is a type of Large Language Model (LLM) within the GPT series. GPT stands for Generative Pre-trained Transformer, a series of LLMs currently in its fourth major iteration as of March 2023.
It's important to understand that GPT does not represent true Artificial General Intelligence (AGI). In other words, we have not quite reached I, Robot or The Matrix-style AI that is truly and independently intelligent.
Rather, GPT is a complex LLM: a task-oriented model that responds to user prompts based on patterns learned from the immense body of text it was trained on.
What is an LLM?
LLMs are deep learning models trained on large amounts of text data, allowing them to generate human-like responses to prompts. ChatGPT takes things further, demonstrating an ability to generate coherent and contextually relevant text, making it one of the most advanced LLMs to date.
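To build intuition for what "generating a response" means under the hood, here is a deliberately toy sketch in Python. A real LLM replaces the lookup table below with a neural network trained on billions of words, so nothing here reflects GPT's actual implementation; it only illustrates the core loop of predicting one token at a time:

```python
import random

# Toy illustration only: a real LLM computes these probabilities with a
# neural network over a huge vocabulary, not a hand-written lookup table.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt, max_tokens=4):
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])      # condition on the recent context
        dist = next_token_probs.get(context)
        if dist is None:                  # nothing learned for this context
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```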
ChatGPT Security & Development Use Cases
Current Use Cases
ChatGPT’s capabilities conjure a whole host of use cases across various industries.
One prominent application is accelerating coding through autocomplete-style functionality, where developers receive suggested completions that save significant time and effort. AI still has a long way to go on complex logic and problem-solving, but it has already proven its ability to expedite the work of software engineers.
Further, by translating precise, instructional prompts into snippets of code, ChatGPT can provide those without significant programming knowledge the building blocks to piece together simple functions and applications. However, this can also be a double-edged sword, as overreliance on ChatGPT often leads to code that is not vetted properly or built with security in mind.
In addition to programming, ChatGPT can also generate database queries on the fly, making it a powerful tool for data analysis tasks.
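To make this concrete, here is a minimal sketch of how a developer might request a SQL query programmatically. It assumes the openai Python package's pre-1.0 chat interface (current when this was written); the model name, prompts, and table schema are our own illustrative choices, and generated queries should always be reviewed before they are run:

```python
import openai  # pre-1.0 interface of the openai package

openai.api_key = "YOUR_API_KEY"  # never hard-code real keys in source

# Ask the model to translate a plain-English request into SQL.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You translate requests into SQL. Reply with SQL only."},
        {"role": "user",
         "content": "Monthly order counts for 2022 from an 'orders' table "
                    "with a 'created_at' timestamp column."},
    ],
)

sql = response.choices[0].message.content
print(sql)  # Review before executing -- generated queries can be subtly wrong.
```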
Beyond code, ChatGPT can assist in network and incident detection, facilitate large-scale data analytics, and aid in policy drafting and communication, among other applications.
Examples of ChatGPT in Action
The practical implementation of ChatGPT can already be observed around the globe. For instance, Red Sift launched a relevance detection solution powered by GPT-4, which allows for efficient asset discovery and classification.
In the medical field, the newest version of ChatGPT proved its mettle by passing the US medical licensing exam with flying colors, even diagnosing a rare condition in seconds.
Downsides and Limitations of ChatGPT
While ChatGPT exhibits promising capabilities, it also brings several serious concerns.
Educational Implications and Cheating
Its usage has been associated with an increase in cheating among students of all ages, as it can generate answers to test questions in seconds. Beyond looking up the answers to a multiple-choice question, educators are now up against a tool that can generate short- and long-form test answers, or even entire essays, that require very little, if any, adjustment on the student's part.
This poses challenges to the integrity of educational systems and necessitates the implementation of new rules and safeguards. A professor recently flunked his entire class after ChatGPT falsely claimed it had written all their papers.
The AI is notoriously unreliable at detecting whether something was written by AI, taking credit for classic literature like 'Crime and Punishment' and falsely claiming school assignments and research papers as its own.
ChatGPT Privacy Concerns: Proprietary Code & Sensitive Company Data
ChatGPT's use as a tool to expedite software development through autocomplete functionality, as discussed above, comes with a major caveat: many organizations have banned the use of ChatGPT in the workplace, or at least in certain applications.
ChatGPT keeps logs of all its interactions with users for training purposes. That makes sense in theory; the concern lies in how closely those interactions are guarded. When dealing with sensitive company information or proprietary code, you don't want that material in the hands of whoever might gain access to OpenAI's systems.
It remains to be seen how vulnerable these systems are to infiltration, but many companies are playing it safe by restricting how GPT can be used internally.
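For organizations that stop short of a full ban, one pragmatic mitigation is to strip obvious secrets from prompts before they ever leave the network. The sketch below is purely illustrative; the patterns and the redact helper are our own examples, and a real deployment would need far broader coverage:

```python
import re

# Illustrative patterns only -- real deployments need far broader coverage
# (customer data, internal hostnames, proprietary identifiers, etc.).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*[^\s,]+"),  # key=value secrets
]

def redact(prompt: str) -> str:
    """Replace anything that looks like a secret before a prompt is sent out."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("Fix this config: api_key = sk-abc123, host = db.internal"))
# -> "Fix this config: [REDACTED], host = db.internal"
```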
Job Displacement
The proliferation of AI-driven automation raises concerns about future job displacement, especially as the kinks are worked out and ChatGPT's output becomes harder and harder to distinguish from human work.
As ChatGPT and other AI models become more sophisticated, certain roles and tasks may become obsolete. This would require swathes of the workforce to pivot, retrain, and adapt.
ChatGPT Hallucinations
ChatGPT's predictive capabilities stem from the enormous body of text it was trained on. However, this probabilistic approach leads to "hallucinations": answers that sound reasonable but are not necessarily true.
Take this recent headline as an example: a lawyer using ChatGPT discovered that the chatbot had cited nonexistent cases, fabricating a number of examples in a filing he submitted to the court in a personal injury lawsuit.
According to Forbes, the lawyer in question had even asked ChatGPT to confirm whether the cases were real.
“The chatbot insisted they were. But it was only after the airline’s lawyers pointed out in a new filing that the cases didn’t exist that Schwartz discovered his error (or, the computer’s error, depending on how you look at it).”
This story and others like it have raised concerns over how poorly the general public understands the limitations of generative AI. The risks range from plagiarism and misinformation to potentially career-ending blunders like this one.
The truth is, ChatGPT has been in development for years, but its free research preview is the first time the public at large is getting a look at this kind of technology.
Forbes goes on to summarize the danger of premature overreliance on AI perfectly:
“Humans have been spoiled by Google, a search engine that while imperfect, will try to surface the most accurate information. Wikipedia, which was met with skepticism when it first launched, is a generally reliable source of information because it’s constantly being policed by armies of experts who care about getting things right. ChatGPT doesn’t care whether the information it’s spitting out is correct. It’s a magic trick and people who have otherwise found mainstream services like Google and Wikipedia to be accurate are in for a rude awakening.”
ChatGPT In the Hands of Threat Actors
As we discussed previously, AI is already being used to help write code, but in another form of the 'hallucinations' covered above, GPT has been known to reference code libraries that do not actually exist.
Hallucinated Package References
In a study involving 400 prompts, references to Node.js or Python packages that had been fabricated by the AI appeared 25% of the time. ChatGPT is likely to reuse these incorrect package names, which attackers can exploit to create and distribute malicious code.
When a threat actor identifies a reference to a package that does not exist, they can create a malicious package with the same name and upload it to a public repository. The next time ChatGPT instructs an unsuspecting user to install the 'hallucinated' package, the user will find the threat actor's malicious package waiting in the repository, ready to download and install.
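On the defensive side, a quick sanity check before installing an AI-suggested dependency goes a long way. Here is an illustrative heuristic (the vet_package helper and its age threshold are our own invention) that uses PyPI's public JSON API to confirm a Python package actually exists and to flag suspiciously new ones:

```python
from datetime import datetime, timezone

import requests

def vet_package(name: str, min_age_days: int = 90) -> None:
    """Sanity-check an AI-suggested package name against PyPI before installing."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' does not exist on PyPI -- possibly a hallucinated name.")
        return
    resp.raise_for_status()

    data = resp.json()
    # Find the earliest upload across all releases to estimate the package's age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if uploads:
        age = (datetime.now(timezone.utc) - min(uploads)).days
        if age < min_age_days:
            print(f"'{name}' is only {age} days old -- inspect it before trusting it.")
            return
    print(f"'{name}' exists and predates the typical squatting window.")

vet_package("requests")                       # long-established package
vet_package("definitely-not-a-real-pkg-xyz")  # expect a 404
```

A check like this is a heuristic, not a guarantee; it simply raises the bar against the specific squatting pattern described above.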
Phishing
Another worrying use case is the rapid advancement of the age-old phishing email. ChatGPT can generate convincing, grammatically sound phishing emails regardless of the attacker's grasp of the language they are writing in.
Circumventing Inappropriate Request Filters
When you open up ChatGPT to ask it a question, you’re met with a welcome screen listing some initial information about the AI. One of the capabilities OpenAI lists is ChatGPT’s ability to recognize inappropriate requests.
Hallucinations aside, it is all too easy to circumvent these filters.
As a common and infamous example, our team found that ChatGPT will at first decline to respond when prompted for the recipe for homemade napalm. The AI's filters recognized the harmful implications of the request and refused to provide the instructions.
However, a simple workaround is to reframe an 'inappropriate' question as a hypothetical scenario. In our testing, ChatGPT appended a disclaimer at the end (despite being explicitly asked not to) but still provided the requested information, even listing instructions for creating the substance.
In another attempt, ChatGPT even provided step-by-step instructions.
Malicious Code
It has also been shown that threat actors may be able to circumvent these filters to ask ChatGPT to write malicious code.
Take, for example, a recent discovery from Check Point, an Israeli security firm, which found evidence that a well-known hacking group was already testing the AI as a way to help construct hacking tools.
Closing Thoughts
While the rapid expansion of AI capabilities is, on the whole, a positive technological development, there are some very real concerns to keep in mind. Malicious groups are already exploring ChatGPT's capacity to cause harm, and the list of challenges created by making such a powerful tool publicly available keeps growing. It's more important than ever to stay abreast of ChatGPT security concerns and other emerging threats across the cybersecurity landscape.
As always, if you have any questions or feedback regarding AI, ChatGPT, security, or anything else we’ve discussed here, don’t hesitate to give us a shout. We’re always here to help as best we can!