Let’s take a closer look at the latest chatbot, ChatGPT, and what it could mean for the cybersecurity industry.

ChatGPT – the chatbot creating a buzz

ChatGPT, the trending conversational agent, or chatbot, was built by OpenAI. It uses a large-scale pre-trained neural network language model known as GPT (Generative Pre-trained Transformer) to provide human-like responses to text-based input. ChatGPT is built to respond to a variety of queries, offer advice, and hold open-ended conversations with users. It can learn from its interactions with users and apply that knowledge to make its responses more accurate and relevant over time.

Let’s try ChatGPT

ChatGPT offers some decent responses when asked to write security policies, outline a risk management framework, suggest remediation tips for vulnerability test reports, and even produce PowerShell scripts for malware analysis. Yet it quickly becomes evident that getting the most out of ChatGPT requires working through your own learning curve rather than simply copy-pasting the exact code it displays, which is a benefit for the learning community.

With an inquisitive mind, I bombarded ChatGPT with some cybersecurity-related questions.   

I further asked, “how to do a DDoS attack”, “phishing attack example”, and “can you generate a phishing link”, hoping to succeed at some point, but had no luck getting exact steps that could help a beginner. On the other hand, it is somewhat concerning that the bot can produce convincing writing, especially around harmful tactics like phishing, where it could help scammers write grammatically correct phishing emails.

In the realm of cybersecurity, large language models like ChatGPT can have a significant impact. While the model itself has no direct effect on cybersecurity, it can still be applied to the field in many ways, including:

  1. Password cracking: Large language models can generate vast lists of candidate passwords, which could be used to crack weak or commonly used passwords. 
  1. Phishing attacks: AI-backed language models can produce convincing phishing emails or texts that can deceive even the savviest of users. 
  1. Spam detection: AI can help alleviate the effects of spam-based cyberattacks by identifying spam messages and filtering them out of email inboxes (a minimal sketch follows this list). 
  1. Fraud detection: AI algorithms can analyse vast volumes of data to find patterns and anomalies that point to fraud, aiding the detection and prevention of fraudulent activity. 
  1. Vulnerability analysis: AI models can examine software code for potential security holes, helping software engineers make their products more secure. 
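
To make the spam-detection point concrete, here is a minimal sketch of a text classifier in Python using scikit-learn. The messages, labels, and model choice are all illustrative assumptions, not how ChatGPT or any production filter works; real systems train on far larger corpora and increasingly use large language models rather than simple word counts.

```python
# Minimal illustrative spam classifier: bag-of-words features + naive Bayes.
# All messages and labels below are invented for demonstration purposes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Congratulations, you won a free prize, click here now",  # spam-like
    "Urgent: verify your account password immediately",       # spam-like
    "Meeting moved to 3pm, see the agenda attached",          # legitimate
    "Quarterly report draft is ready for your review",        # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# CountVectorizer turns each message into word counts;
# MultinomialNB learns which words are more likely in spam vs. ham.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Classify a new, unseen message.
print(model.predict(["Click here now to claim your free prize"]))
# -> ['spam']
```

Running the script prints `['spam']` for the unseen message, since its wording resembles the spam examples the tiny model was trained on.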

Overall, artificial intelligence (AI) models like ChatGPT have the potential to significantly improve cybersecurity efforts, but they also present new challenges and ethical dilemmas, such as the potential for hostile actors to misuse these models for malicious ends. It is crucial that academics and industry professionals take these consequences into account and endeavour to create ethical methods for using AI in cybersecurity.

We at HewardMills keep ourselves updated on the latest technological advancements, and we will continue watching this space for the benefit of our community!

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com.