
Artificial Intelligence (AI) is developing at a rapid pace, and like any rapidly advancing technology, it can be used for both good and ill. There is growing evidence that cybercriminals are using AI to support their activities, carrying out increasingly complex cyberattacks at a speed that leaves little room for human intervention to prevent them or limit their impact. Cybersecurity professionals must therefore rely on AI that can act autonomously as an assistant to protect their company's data.
In the first two months of 2023 alone, as ChatGPT emerged, social engineering attacks such as phishing rose by 135 percent. Strikingly, the language in these phishing emails was far more sophisticated and to the point: fluent, grammatically correct sentences rather than the misspellings that once gave them away. As a result, it has become much harder for people to distinguish phishing emails from legitimate ones. Hackers will increasingly turn to artificial intelligence and machine learning, for example malicious algorithms that adapt, learn and continuously improve themselves in order to circumvent companies' cybersecurity.
AI used for criminal purposes cannot be stopped by humans alone; it will have to be countered by "good" AI. This is likely to change the role of the human security analyst as it exists in many companies today. Security teams will no longer have time to sift manually through logs of historical events or hand-tune rules in their firewalls. There is simply too much data, too many threats and too many incidents to manage with humans alone. AI can do this heavy lifting and protect a company autonomously, making it the essential partner security teams so urgently need.
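To make the "heavy lifting" concrete, here is a minimal sketch of automated log triage: baseline each host's normal rate of failed logins, then flag only the hosts whose current rate is a statistical outlier, so analysts review a handful of alerts instead of the raw logs. The hostnames, counts, and threshold are all hypothetical, and real systems use far richer models than a z-score.

```python
import statistics

# Hypothetical hourly failed-login counts per host (illustrative data only).
history = {
    "web-01": [3, 5, 4, 6, 5, 4],
    "web-02": [2, 3, 2, 4, 3, 2],
    "db-01":  [1, 0, 2, 1, 1, 0],
}
current = {"web-01": 5, "web-02": 3, "db-01": 48}  # db-01 is spiking

def flag_anomalies(history, current, z_threshold=3.0):
    """Return (host, z-score) pairs whose current count is an outlier
    relative to that host's own historical baseline."""
    flagged = []
    for host, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts) or 1.0  # guard against zero variance
        z = (current[host] - mean) / stdev
        if z > z_threshold:
            flagged.append((host, round(z, 1)))
    return flagged

print(flag_anomalies(history, current))  # → [('db-01', 62.7)]
```

The design point is per-host baselining: a count that is normal for a busy web server can be a glaring anomaly for a quiet database host, which is exactly the kind of context-heavy judgment that is tedious for humans at scale.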