ChatGPT: A Jester Among Knights? A Look at Its Cybersecurity Capabilities

BluShark Media
3 min read · Jul 21, 2023


The Joker's New Clothes ©Suprasia

Imagine stepping into a medieval world. There, amidst the colorful courtiers and gallant knights, you spot a jester. This jester, ChatGPT, developed by OpenAI, is not your run-of-the-mill jester, though. He’s a charismatic storyteller, an artful linguist, and a seemingly wise sage. But the question we find ourselves asking is, can this jester also be a knight in the realm of cybersecurity?

ChatGPT, in the empire of artificial intelligence, has made a name for itself as an accomplished storyteller, spinning tales and engaging in discussions that would make even a philosopher take notice. But when it comes to being a knight, defending its digital kingdom from cybersecurity threats, can it hold its own?

Our jester isn’t designed to be a knight. He’s an AI language model, more scholar than warrior, well-versed in tales, not combat. The pen — or the digital equivalent — can indeed be mightier than the sword, but how mighty is our jester when it comes to cybersecurity?

ChatGPT, in its defense, boasts armor forged from its data handling strategy. It does not store personal data from its conversations unless users expressly provide it as feedback. It wears an invisibility cloak of anonymity, an elusive target for the shadows that lurk in the cyber world. Moreover, it is not designed to pry: it steers clear of risky situations by never asking users for sensitive personal information.

But, as Immunefi, a web3 bug bounty platform, points out, the jester's performance is far from knight-worthy. In a recent report, Immunefi reveals that although roughly 76% of white hat researchers — those scanning systems for weaknesses — use ChatGPT regularly, around 64% find it lacking in accuracy when identifying security vulnerabilities, and approximately 61% felt it lacked the specialized knowledge needed to identify exploits that hackers could use.
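To picture the kind of task those researchers hand the jester, here is a minimal, hypothetical sketch (not drawn from the Immunefi report): it pastes a snippet containing a textbook SQL injection flaw into a ChatGPT prompt, using the pre-1.0 `openai` Python client, and prints the model's verdict. A classic flaw like this is easy prey; the subtler, protocol-specific exploits are where the surveyed researchers found the model wanting.

```python
# Hypothetical sketch: asking ChatGPT to audit a deliberately vulnerable snippet.
# Assumes the pre-1.0 `openai` Python client and an API key of your own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Target code with a textbook SQL injection: user input is concatenated
# straight into the query string instead of being passed as a parameter.
VULNERABLE_SNIPPET = '''
def get_user(db, username):
    query = "SELECT * FROM users WHERE username = '" + username + "'"
    return db.execute(query).fetchall()
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a security auditor."},
        {"role": "user",
         "content": "List any vulnerabilities in this code:\n" + VULNERABLE_SNIPPET},
    ],
)

# Print the model's assessment; a competent audit should flag the injection
# and suggest parameterized queries instead of string concatenation.
print(response.choices[0].message["content"])
```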

The architects behind ChatGPT aren’t idle bystanders watching the jester take on roles he isn’t trained for. As diligent blacksmiths, they are continually refining the AI’s capabilities and have even set up a ‘round table’ of reviewers to assess model outputs for a variety of inputs.

Immunefi’s findings cast light on the need for more specialized training for ChatGPT in diagnosing cyber threats. The AI, as it currently stands, lacks the specific knowledge necessary for an effective audit. However, the silver lining is that there is potential. As Immunefi’s communications lead, Jonah Michaels, suggests, there may be a day when ChatGPT, armed with the right training and datasets, can reliably execute these tasks.

So, is ChatGPT good at cybersecurity? As it stands, our digital jester is more of a promising squire than a full-fledged knight. Its developers, the blacksmiths of this tale, are tirelessly working to improve its capabilities against the ever-evolving threats of the cyber world.

In the end, the tale of ChatGPT serves as an important narrative in the ongoing saga of AI and cybersecurity. It’s a tale that teaches us that while the shield of anonymity and discretion is crucial, it is not enough. To be a true knight, the AI must learn to wield the sword of specialized knowledge and precision — a feat that, while challenging, may not be impossible for our endearing jester.

BluShark Media is more than happy to walk with you on this journey into web3 and AI. Follow our Twitter and Threads for more!

Remember, nothing mentioned here is financial advice.


Written by BluShark Media

BluShark Media is your trusted guide for web3, blockchain, NFTs, gaming & AI in the transformative digital world.
