09/28/2023

Robots: Cybercriminals of the Future?

Cybercriminals of the Future? Bots & Botnets

Although the first AI-generated cyber attack has yet to be officially documented, artificial intelligence is expected to have an impact in security. In fact, AI is already being used by cyber criminals, for example to fine-tune phishing emails. The almost uncanny ability of programs such as ChatGPT to generate convincingly human-sounding text undoubtedly helps here. And new threats are likely to come at us even faster thanks to AI. How can we arm ourselves against this?

To answer that question, we must first determine the current state of affairs regarding AI and cybercrime. Because although no full AI-generated cyber attack has taken place yet – at least to our knowledge – the danger of prompt injection attacks, for example, is already recognized. With this, Large Language Models (LLMs) are 'coaxed' through certain questions to take actions for which the model was not developed.

Credit card information

For example, a German researcher managed to turn Bing Chat into a social engineer that searches for and exfiltrates personal information without raisind suspicion. The user doesn't have to ask about the website or do anything except interact with Bing Chat while the website is open in the browser. The chatbot is then instructed to ask for the personal data and credit card details of the website visitor.
Though also not entirely new, since banking trojans have done the same for years, prompt injection is a serious security threat that needs to be addressed more firmly as more models are deployed to new cases and interface with more systems. But the importance of security boundaries between trusted and untrusted inputs for LLMs is currently too often underestimated – if it is even mentioned at all.

The manufacturers of AI solutions have an important role to play in this. Fortunately, we see that many manufacturers are already taking measures to contain the dangers. For example, Microsoft blocks attempts at prompt injection and OpenAI, the company behind ChatGPT, also does a lot to protect data.
 

Own responsibility

But it is not like only AI manufacturers are in a position to protect against possible AI cybercrime. Companies and end users employing those technologies can also take measures. At the moment, this mainly revolves around preparing for possible variants of cybercrime methods that are already known. After all, no one can look into the future and predict with absolute certainty how cybercrime will develop, not even AI – although one can venture a few educated guesses. At the moment, the danger lies mainly in enhanced phishing attacks and other attempts at data theft.

But the principle on which LLM and AI float does offer tools for responsible handling. AI does not invent anything itself, but relies on data sources. It is therefore wise to think carefully about which data you share with whom (or what). You shouldn't make AI smarter than it is. In other words: don't give an AI service on a website all the information about yourself or your company, because you don't know whether that can be abused at a later stage. Speaking of handing over data: You might in some cases want to carefully read the EULA and privacy policies of any new software or service you use – oftentimes they allow the maker of the software or service to use any input to “improve services” – and with respect to AI technologies this might include permission to use any  and all input as training data for AI systems. 

Security awareness training should therefore pay attention to this aspect. Employees must not only be alert to phishing emails, but also be trained in the responsible use of AI tools. Security awareness also revolves around our own behavior online and therefore about what information we share and place online. Cybercrime is not a passive, one-sided game where the attacker is in control. Your own role must also be critically examined in order to reduce risks.
 


Eddy Willems

Eddy Willems

Security Evangelist


Share Article